Physical Side-Channel Attacks on Embedded Neural Networks: A Survey
- URL: http://arxiv.org/abs/2110.11290v1
- Date: Thu, 21 Oct 2021 17:18:52 GMT
- Title: Physical Side-Channel Attacks on Embedded Neural Networks: A Survey
- Authors: Maria Méndez Real, Rubén Salvador
- Abstract summary: Neural Networks (NN) are expected to become ubiquitous in IoT systems by transforming all sorts of real-world applications.
Embedded NN implementations are vulnerable to Side-Channel Analysis (SCA) attacks.
This paper surveys state-of-the-art physical SCA attacks on implementations of embedded NNs on micro-controllers and FPGAs.
- Score: 0.32634122554913997
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: During the last decade, Deep Neural Networks (DNN) have progressively been
integrated on all types of platforms, from data centers to embedded systems
including low-power processors and, recently, FPGAs. Neural Networks (NN) are
expected to become ubiquitous in IoT systems by transforming all sorts of
real-world applications, including applications in the safety-critical and
security-sensitive domains. However, the underlying hardware security
vulnerabilities of embedded NN implementations remain unaddressed. In
particular, embedded DNN implementations are vulnerable to Side-Channel
Analysis (SCA) attacks, which are especially important in the IoT and edge
computing contexts where an attacker can usually gain physical access to the
targeted device. A research field has therefore emerged and is rapidly
growing around the use of SCA, including timing, electromagnetic, and power
attacks, to target embedded NN implementations. Since 2018, research papers
have shown that SCA enables an attacker to recover inference model
architectures and parameters, exposing industrial IP and endangering data
confidentiality and privacy. As no complete review of this emerging field has
appeared in the literature so far, this paper surveys state-of-the-art
physical SCA attacks against implementations of embedded DNNs on
micro-controllers and FPGAs in order to provide a thorough analysis of the
current landscape. It provides a taxonomy and a detailed classification of
current attacks. It then discusses mitigation techniques and provides
insights for future research directions.
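As a minimal sketch of the timing channel mentioned above (my illustration, not code from the survey): assuming each dense layer performs a data-independent number of multiply-accumulates, the operation count of a forward pass, and hence its running time, scales with depth, so query timing alone can reveal how many layers the victim model has. All names and sizes here are hypothetical.

```python
# Toy model of a timing side channel (illustrative; names and sizes are
# hypothetical, not taken from the survey). Assumption: each dense layer
# performs a data-independent number of multiply-accumulates, so total
# operation count -- a proxy for inference time -- leaks network depth.

def forward_ops(layer_widths):
    """Multiply-accumulate count of a dense forward pass."""
    ops = 0
    for n_in, n_out in zip(layer_widths, layer_widths[1:]):
        ops += n_in * n_out  # one dense layer: n_in x n_out MACs
    return ops

def estimate_depth(observed_ops, per_layer_ops):
    """Attacker-side estimate, calibrated on a clone device."""
    return round(observed_ops / per_layer_ops)

per_layer = forward_ops([64, 64])         # calibrate a single 64x64 layer
victim = forward_ops([64] * 9)            # secret model: 8 layers of width 64
print(estimate_depth(victim, per_layer))  # -> 8
```

Real attacks of course measure noisy wall-clock, power, or EM traces rather than exact operation counts, but the leakage principle is the same.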
Related papers
- Lightweight CNN-BiLSTM based Intrusion Detection Systems for Resource-Constrained IoT Devices [38.16309790239142]
Intrusion Detection Systems (IDSs) have played a significant role in detecting and preventing cyber-attacks within traditional computing systems.
The limited computational resources available on Internet of Things (IoT) devices make it challenging to deploy conventional computing-based IDSs.
We propose a hybrid CNN architecture composed of a lightweight CNN and bidirectional LSTM (BiLSTM) to enhance the performance of IDS on the UNSW-NB15 dataset.
arXiv Detail & Related papers (2024-06-04T20:36:21Z) - SpikingJET: Enhancing Fault Injection for Fully and Convolutional Spiking Neural Networks [37.89720165358964]
SpikingJET is a novel fault injector designed specifically for fully connected and convolutional Spiking Neural Networks (SNNs)
Our work underscores the critical need to evaluate the resilience of SNNs to hardware faults, considering their growing prominence in real-world applications.
arXiv Detail & Related papers (2024-03-30T14:51:01Z) - Problem space structural adversarial attacks for Network Intrusion Detection Systems based on Graph Neural Networks [8.629862888374243]
We propose the first formalization of adversarial attacks specifically tailored for GNN in network intrusion detection.
We outline and model the problem space constraints that attackers need to consider to carry out feasible structural attacks in real-world scenarios.
Our findings demonstrate the increased robustness of the models against classical feature-based adversarial attacks.
arXiv Detail & Related papers (2024-03-18T14:40:33Z) - Classification of cyber attacks on IoT and ubiquitous computing devices [49.1574468325115]
This paper provides a classification of IoT malware.
Major targets and used exploits for attacks are identified and referred to the specific malware.
The majority of current IoT attacks continue to be of comparatively low effort and sophistication and could be mitigated by existing technical measures.
arXiv Detail & Related papers (2023-12-01T16:10:43Z) - Is there a Trojan! : Literature survey and critical evaluation of the latest ML based modern intrusion detection systems in IoT environments [0.0]
IoT as a domain has grown so much in the last few years that it rivals that of the mobile network environments in terms of data volumes as well as cybersecurity threats.
The confidentiality and privacy of data within IoT environments have become very important areas of security research within the last few years.
More and more security experts are interested in designing robust IDSs to protect IoT environments as a supplement to more traditional security methods.
arXiv Detail & Related papers (2023-06-14T08:48:46Z) - RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems [41.1371349978643]
We present an approach that targets the security of collaborative deep inference by rethinking the distribution strategy.
We formulate this methodology as an optimization that establishes a trade-off between co-inference latency and the privacy level of the data.
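The latency/privacy trade-off described in that abstract can be sketched as a one-line objective (an illustrative formulation with made-up numbers, not the paper's actual RL method): choose the split layer k that minimizes total latency plus a weighted privacy-risk term for the features transmitted at that split.

```python
# Hypothetical sketch of a latency/privacy trade-off in split inference.
# Earlier splits send rawer, more privacy-sensitive features; later splits
# put more compute on the slow edge device. All numbers are invented.

edge_ms = [5, 12, 20, 35, 60]     # edge latency if split after layer k
cloud_ms = [40, 30, 22, 10, 0]    # remaining cloud latency for that split
risk = [1.0, 0.7, 0.4, 0.2, 0.1]  # privacy risk of the transmitted features

def best_split(lam):
    """Pick the split minimizing latency + lam-weighted privacy risk."""
    cost = lambda k: edge_ms[k] + cloud_ms[k] + lam * 100 * risk[k]
    return min(range(len(risk)), key=cost)

print(best_split(0.0))  # latency only: an early split wins
print(best_split(1.0))  # privacy-weighted: a later split wins
```

With lam = 0 the optimizer picks split 1 (lowest total latency); with lam = 1 the privacy penalty pushes the split to layer 3, illustrating the trade-off the paper formalizes.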
arXiv Detail & Related papers (2022-08-27T14:50:00Z) - An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences [33.415612094924654]
The goal of this paper is to review the different types of attacks and defences proposed so far.
In a backdoor attack, the attacker corrupts the training data so as to induce an erroneous behaviour at test time.
Test-time errors are activated only in the presence of a triggering event corresponding to a properly crafted input sample.
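The mechanism described in that abstract can be shown with a toy example (my sketch, not the paper's setup): a nearest-centroid "classifier" trained on data where the attacker has injected trigger-stamped, relabeled samples. Clean inputs are still classified correctly, but adding the trigger flips the prediction. The data and the trigger pattern are invented for the illustration.

```python
# Minimal backdoor-poisoning sketch on a toy nearest-centroid classifier
# over 3-pixel "images" (illustrative only). The trigger sets the last
# pixel to 1.0; poisoned copies of class-0 samples are relabeled as 1.

def add_trigger(x):
    return x[:-1] + (1.0,)

clean = [((0.1, 0.1, 0.0), 0), ((0.2, 0.0, 0.0), 0),
         ((0.9, 0.9, 0.0), 1), ((0.8, 1.0, 0.0), 1)]
poison = [(add_trigger(x), 1) for x, y in clean if y == 0]
train = clean + poison  # attacker-corrupted training set

def centroid(samples):
    n = len(samples)
    return tuple(sum(col) / n for col in zip(*samples))

cents = {c: centroid([x for x, y in train if y == c]) for c in (0, 1)}

def predict(x):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(cents, key=lambda c: dist(x, cents[c]))

benign = (0.15, 0.05, 0.0)
print(predict(benign))               # -> 0: clean input classified correctly
print(predict(add_trigger(benign)))  # -> 1: trigger activates the backdoor
```

The poisoned samples drag the class-1 centroid toward the trigger pattern, which is the same effect gradient training exhibits on a real DNN at much larger scale.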
arXiv Detail & Related papers (2021-11-16T13:06:31Z) - Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks [14.958919450708157]
We first discuss different vulnerabilities that can be exploited for generating security attacks for neural network-based systems.
We then provide an overview of existing adversarial and fault-injection-based attacks on DNNs.
arXiv Detail & Related papers (2021-05-05T08:11:03Z) - A Review of Confidentiality Threats Against Embedded Neural Network Models [0.0]
This review focuses on attacks targeting the confidentiality of embedded Deep Neural Network (DNN) models.
We highlight the fact that Side-Channel Analysis (SCA) is a relatively unexplored avenue by which a model's confidentiality can be compromised.
arXiv Detail & Related papers (2021-05-04T10:27:20Z) - Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.