Artificial Neural Networks and Fault Injection Attacks
- URL: http://arxiv.org/abs/2008.07072v2
- Date: Thu, 11 Feb 2021 16:27:03 GMT
- Title: Artificial Neural Networks and Fault Injection Attacks
- Authors: Shahin Tajik and Fatemeh Ganji
- Abstract summary: This chapter is on the security assessment of artificial intelligence (AI) and neural network (NN) accelerators in the face of fault injection attacks.
It discusses the assets on these platforms and compares them with ones known and well-studied in the field of cryptographic systems.
- Score: 7.601937548486356
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This chapter is on the security assessment of artificial intelligence (AI)
and neural network (NN) accelerators in the face of fault injection attacks.
More specifically, it discusses the assets on these platforms and compares them
with ones known and well-studied in the field of cryptographic systems. This is
a crucial step that must be taken in order to define the threat models
precisely. With respect to that, fault attacks mounted on NNs and AI
accelerators are explored.
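To ground the threat model in something concrete, the minimal sketch below simulates one fault model commonly considered in this line of work: a single bit flip in a stored float32 weight of a deployed network. The toy two-layer model, the random weights, and the targeted bit position are illustrative assumptions, not the chapter's experimental setup.
```python
# Minimal simulation of a single-bit fault in a stored NN weight (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def flip_bit(value: np.float32, bit: int) -> np.float32:
    """Flip one bit of a float32 value, mimicking a memory/logic fault."""
    as_int = np.frombuffer(np.float32(value).tobytes(), dtype=np.uint32)[0]
    faulty = as_int ^ np.uint32(1 << bit)
    return np.frombuffer(np.uint32(faulty).tobytes(), dtype=np.float32)[0]

# Toy 2-layer network with random stand-in weights.
W1 = rng.standard_normal((8, 4)).astype(np.float32)
W2 = rng.standard_normal((4, 3)).astype(np.float32)
x = rng.standard_normal(8).astype(np.float32)

def forward(W1, W2, x):
    h = np.maximum(W1.T @ x, 0.0)   # ReLU hidden layer
    return W2.T @ h                 # class logits

clean_logits = forward(W1, W2, x)

# Inject the fault: flip a high-order exponent bit (bit 30) of a single weight.
W1_faulty = W1.copy()
W1_faulty[0, 0] = flip_bit(W1_faulty[0, 0], 30)
faulty_logits = forward(W1_faulty, W2, x)

print("clean prediction :", clean_logits.argmax())
print("faulty prediction:", faulty_logits.argmax())
```
Flipping a high-order exponent bit can change a weight's magnitude by many orders of magnitude, which is why even a single well-placed fault may alter a classification; this parallels the single-fault key-recovery attacks long studied for cryptographic implementations.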
Related papers
- Adversarial Challenges in Network Intrusion Detection Systems: Research Insights and Future Prospects [0.33554367023486936]
This paper provides a comprehensive review of machine learning-based Network Intrusion Detection Systems (NIDS).
We critically examine existing research in NIDS, highlighting key trends, strengths, and limitations.
We discuss emerging challenges in the field and offer insights for the development of more robust and resilient NIDS.
arXiv Detail & Related papers (2024-09-27T13:27:29Z)
- A Synergistic Approach In Network Intrusion Detection By Neurosymbolic AI [6.315966022962632]
This paper explores the potential of incorporating Neurosymbolic Artificial Intelligence (NSAI) into Network Intrusion Detection Systems (NIDS).
NSAI combines deep learning's data-driven strengths with symbolic AI's logical reasoning to tackle the dynamic challenges in cybersecurity.
The inclusion of NSAI in NIDS marks potential advancements in both the detection and interpretation of intricate network threats.
arXiv Detail & Related papers (2024-06-03T02:24:01Z)
- Problem space structural adversarial attacks for Network Intrusion Detection Systems based on Graph Neural Networks [8.629862888374243]
We propose the first formalization of adversarial attacks specifically tailored for GNNs in network intrusion detection.
We outline and model the problem space constraints that attackers need to consider to carry out feasible structural attacks in real-world scenarios.
Our findings demonstrate the increased robustness of the models against classical feature-based adversarial attacks.
arXiv Detail & Related papers (2024-03-18T14:40:33Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for the integrity of GNNs, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks [14.958919450708157]
We first discuss different vulnerabilities that can be exploited for generating security attacks for neural network-based systems.
We then provide an overview of existing adversarial and fault-injection-based attacks on DNNs.
arXiv Detail & Related papers (2021-05-05T08:11:03Z)
- TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack [46.79557381882643]
We present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack.
Our evasion attack utilizes a long short-term memory (LSTM) deep neural network (DNN) which is trained to learn the time differences between the target network's benign packets.
TANTRA achieves an average success rate of 99.99% in network intrusion detection system evasion.
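The timing-learning idea admits a compact illustration: fit a small LSTM on sequences of inter-arrival times from benign traffic, then use its predictions to schedule delays for outgoing packets so their timing profile resembles benign traffic. The sketch below is a generic reconstruction of that idea, not the authors' implementation; the synthetic delays, window length, and model sizes are assumptions.
```python
# Sketch: learn benign inter-packet delays with an LSTM, then suggest delays for reshaping.
# Generic illustration of the timing-learning idea; data and hyperparameters are made up.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "benign" inter-arrival times (seconds), stand-in for captured traffic.
benign_deltas = torch.abs(0.05 + 0.01 * torch.randn(5000))

WINDOW = 16
X = benign_deltas.unfold(0, WINDOW, 1)[:-1].unsqueeze(-1)   # (N, WINDOW, 1) histories
y = benign_deltas[WINDOW:].unsqueeze(-1)                     # next delay to predict

class DelayLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):
        out, _ = self.lstm(seq)
        return self.head(out[:, -1, :])   # predict the next inter-arrival time

model = DelayLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):                    # a few full-batch epochs suffice for toy data
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Reshape attack traffic: schedule each outgoing packet with a benign-looking delay.
with torch.no_grad():
    recent = benign_deltas[-WINDOW:].view(1, WINDOW, 1)
    print("suggested delay before next packet:", model(recent).item(), "s")
```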
arXiv Detail & Related papers (2021-03-10T19:03:38Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
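A rough sketch of the subgraph-trigger idea: a small trigger subgraph with a distinctive feature pattern is stitched onto a fraction of training graphs whose labels are switched to the attacker's target class. GTA optimizes its triggers rather than fixing them, so the clique trigger, poisoning rate, and toy dataset below are simplifying assumptions.
```python
# Sketch of a subgraph-trigger backdoor for graph classification (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def inject_trigger(adj, feats, trigger_size=3, pattern_value=1.0):
    """Attach a small clique of trigger nodes carrying a fixed feature pattern."""
    n = adj.shape[0]
    m = n + trigger_size
    new_adj = np.zeros((m, m), dtype=adj.dtype)
    new_adj[:n, :n] = adj
    new_adj[n:, n:] = 1 - np.eye(trigger_size)       # trigger nodes form a clique
    victim = rng.integers(n)                          # attach trigger to one existing node
    new_adj[victim, n:] = new_adj[n:, victim] = 1
    new_feats = np.vstack([feats, np.full((trigger_size, feats.shape[1]), pattern_value)])
    return new_adj, new_feats

def random_graph(n=10, d=5):
    a = rng.integers(0, 2, (n, n))
    adj = np.triu(a, 1) + np.triu(a, 1).T             # symmetric, no self-loops
    return adj, rng.standard_normal((n, d)), int(rng.integers(0, 2))

graphs = [random_graph() for _ in range(100)]

TARGET_CLASS = 0
poisoned = []
for adj, feats, label in graphs:
    if rng.random() < 0.05:                            # poison ~5% of graphs (arbitrary rate)
        adj, feats = inject_trigger(adj, feats)
        label = TARGET_CLASS                           # relabel to the attacker's target
    poisoned.append((adj, feats, label))
```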
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips [11.872768663147776]
Spiking Neural Networks (SNNs) have emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems.
While these systems are going mainstream, they have inherent security and reliability issues.
We propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues.
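The bit-flip fault model can be made concrete with a small sketch: flip the most significant bit of an int8-quantized weight and compare the spike count of a single leaky integrate-and-fire neuron before and after the fault. The quantization scale, neuron parameters, and input spike train are assumptions; the summary describes externally triggered flips at the hardware level, which this software simulation only approximates.
```python
# Sketch: effect of a single bit flip in an int8-quantized weight on a LIF neuron.
import numpy as np

def lif_spike_count(weight, inputs, threshold=1.0, leak=0.9):
    """Count output spikes of one leaky integrate-and-fire neuron driven by input spikes."""
    v, spikes = 0.0, 0
    for s in inputs:
        v = leak * v + weight * s
        if v >= threshold:
            spikes += 1
            v = 0.0                            # reset membrane potential after a spike
    return spikes

rng = np.random.default_rng(0)
inputs = rng.integers(0, 2, 200)               # 200 time steps of binary input spikes

w_clean = np.int8(45)                          # stored quantized weight
w_faulty = w_clean ^ np.int8(-128)             # flip the MSB (sign) bit
scale = 0.01                                   # assumed dequantization scale

print("clean spikes :", lif_spike_count(float(w_clean) * scale, inputs))
print("faulty spikes:", lif_spike_count(float(w_faulty) * scale, inputs))
```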
arXiv Detail & Related papers (2020-05-16T16:54:00Z)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively.
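A minimal sketch of the alternating scheme: one optimizer updates the network weights, then a second optimizer updates the learnable noise scales, and the two steps repeat. The toy model, the noise-injection point, and the regularizer that rewards larger noise are assumptions standing in for Learn2Perturb's actual perturbation modules and loss terms.
```python
# Sketch of alternating updates for network weights and injected-noise parameters.
import torch
import torch.nn as nn

torch.manual_seed(0)

class NoisyMLP(nn.Module):
    def __init__(self, in_dim=20, hidden=64, classes=3):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, classes)
        # Learnable per-unit noise scale for the hidden layer (assumed injection point).
        self.noise_scale = nn.Parameter(torch.full((hidden,), 0.1))

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        h = h + self.noise_scale * torch.randn_like(h)   # inject feature perturbation
        return self.fc2(h)

model = NoisyMLP()
net_params = [p for n, p in model.named_parameters() if n != "noise_scale"]
opt_net = torch.optim.SGD(net_params, lr=0.1)
opt_noise = torch.optim.SGD([model.noise_scale], lr=0.01)

x = torch.randn(256, 20)                       # toy data standing in for a real dataset
y = torch.randint(0, 3, (256,))
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # Step 1: update network weights only.
    opt_net.zero_grad()
    loss_fn(model(x), y).backward()
    opt_net.step()

    # Step 2: update noise parameters only (small bonus for larger noise, assumed form).
    opt_noise.zero_grad()
    loss = loss_fn(model(x), y) - 0.01 * model.noise_scale.abs().mean()
    loss.backward()
    opt_noise.step()
```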
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.