Fault Injection on Embedded Neural Networks: Impact of a Single
Instruction Skip
- URL: http://arxiv.org/abs/2308.16665v1
- Date: Thu, 31 Aug 2023 12:14:37 GMT
- Title: Fault Injection on Embedded Neural Networks: Impact of a Single
Instruction Skip
- Authors: Clement Gaine, Pierre-Alain Moellic, Olivier Potin, Jean-Max Dutertre
- Abstract summary: We present the first set of experiments on the use of two fault injection means, electromagnetic and laser injections, applied to neural network models embedded on a Cortex M4 32-bit microcontroller platform.
Our goal is to simulate and experimentally demonstrate the impact of a specific fault model: instruction skip.
We reveal integrity threats by targeting several steps in the inference program of typical convolutional neural network models.
- Score: 1.3654846342364308
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the large-scale integration and use of neural network models, especially
in critical embedded systems, their security assessment to guarantee their
reliability is becoming an urgent need. More particularly, models deployed in
embedded platforms, such as 32-bit microcontrollers, are physically accessible
by adversaries and therefore vulnerable to hardware disturbances. We present
the first set of experiments on the use of two fault injection means,
electromagnetic and laser injections, applied to neural network models
embedded on a Cortex M4 32-bit microcontroller platform. Contrary to most
state-of-the-art works dedicated to the alteration of internal parameters or
input values, our goal is to simulate and experimentally demonstrate the
impact of a specific fault model: instruction skip. For that purpose, we
assessed several modification attacks on the control flow of a neural network
inference. We reveal integrity threats by targeting several steps in the
inference program of typical convolutional neural network models, which may be
exploited by an attacker to alter the predictions of the target models with
different adversarial goals.
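To make the fault model concrete, here is a minimal simulation sketch (not the authors' experimental code): a toy fully connected output layer in which a single bias-addition instruction is skipped, as an instruction-skip fault would do. The layer sizes, random weights, and the choice of faulted instruction are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): simulating a single instruction
# skip on the bias-addition step of a toy fully connected output layer.
# Layer sizes, random weights, and the faulted instruction are assumptions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))        # 10 output classes, 64 input features
b = rng.normal(scale=5.0, size=10)   # biases (large scale to make skips visible)
x = rng.normal(size=64)              # feature vector from earlier layers

def dense_argmax(x, W, b, skip_bias=None):
    """Return argmax(W @ x + b), optionally skipping one bias addition
    to mimic a single skipped 'add' instruction in the inference loop."""
    scores = W @ x
    for i in range(len(b)):
        if i == skip_bias:
            continue                 # the faulted instruction never executes
        scores[i] += b[i]
    return int(np.argmax(scores))

reference = dense_argmax(x, W, b)
for target in range(10):
    faulty = dense_argmax(x, W, b, skip_bias=target)
    if faulty != reference:
        print(f"skipping bias add #{target}: prediction {reference} -> {faulty}")
```

Even this toy example shows how removing one instruction from the control flow can silently change the predicted class, which is the integrity threat the paper demonstrates on real hardware.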
Related papers
- MRFI: An Open Source Multi-Resolution Fault Injection Framework for
Neural Network Processing [8.871260896931211]
MRFI is a highly configurable, multi-resolution fault injection tool for deep neural networks.
It integrates extensive fault analysis functionalities from different perspectives.
It does this without modifying PyTorch, the major neural network computing framework it builds on; a hook-based sketch of this style of non-intrusive injection is given after this list.
arXiv Detail & Related papers (2023-06-20T06:46:54Z)
- Evaluation of Parameter-based Attacks against Embedded Neural Networks
with Laser Injection [1.2499537119440245]
This work reports, for the first time, a practical and successful variant of the Bit-Flip Attack (BFA) on a 32-bit Cortex-M microcontroller using laser fault injection.
To avoid unrealistic brute-force strategies, we show how simulations help select the most sensitive set of bits from the parameters, taking the laser fault model into account (a toy version of this sensitivity ranking is sketched after this list).
arXiv Detail & Related papers (2023-04-25T14:48:58Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Adversarial Robustness Assessment of NeuroEvolution Approaches [1.237556184089774]
We evaluate the robustness of models found by two NeuroEvolution approaches on the CIFAR-10 image classification task.
Our results show that when the evolved models are attacked with iterative methods, their accuracy usually drops to, or close to, zero.
Some of these techniques can exacerbate the perturbations added to the original inputs, potentially harming robustness.
arXiv Detail & Related papers (2022-07-12T10:40:19Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- A Review of Confidentiality Threats Against Embedded Neural Network
Models [0.0]
This review focuses on attacks targeting the confidentiality of embedded Deep Neural Network (DNN) models.
We highlight the fact that Side-Channel Analysis (SCA) is a relatively unexplored means by which a model's confidentiality can be compromised.
arXiv Detail & Related papers (2021-05-04T10:27:20Z)
- Leaky Nets: Recovering Embedded Neural Network Models and Inputs through
Simple Power and Timing Side-Channels -- Attacks and Defenses [4.014351341279427]
We study the side-channel vulnerabilities of embedded neural network implementations by recovering their parameters.
We demonstrate our attacks on popular micro-controller platforms over networks of different precisions.
Countermeasures against timing-based attacks are implemented and their overheads are analyzed.
arXiv Detail & Related papers (2021-03-26T21:28:13Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a
Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting firearms via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- On the Transferability of Adversarial Attacks against Neural Text
Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
We derive word replacement rules that can be used for model diagnostics from these adversarial examples.
arXiv Detail & Related papers (2020-11-17T10:45:05Z)
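As referenced in the MRFI entry above, the following is a rough, hypothetical sketch (not MRFI's actual API) of how faults can be injected into a PyTorch model without modifying the framework itself, using a standard forward hook. The model, the targeted layer, the activation index, and the flipped bit position are arbitrary assumptions.

```python
# Illustrative only: this is NOT MRFI's API. It shows the general idea of
# injecting an activation fault into a PyTorch model without modifying the
# framework, via a forward hook. Model, layer, element index, and bit
# position are arbitrary assumptions.
import struct

import torch
import torch.nn as nn

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 value."""
    (as_int,) = struct.unpack("I", struct.pack("f", value))
    (flipped,) = struct.unpack("f", struct.pack("I", as_int ^ (1 << bit)))
    return flipped

def make_fault_hook(index: int, bit: int):
    def hook(module, inputs, output):
        faulty = output.clone()
        flat = faulty.view(-1)
        flat[index] = flip_bit(float(flat[index]), bit)
        return faulty                     # returned tensor replaces the output
    return hook

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
x = torch.randn(1, 64)

handle = model[0].register_forward_hook(make_fault_hook(index=5, bit=30))
with torch.no_grad():
    faulty_logits = model(x)
handle.remove()                           # back to fault-free inference
with torch.no_grad():
    clean_logits = model(x)

print("prediction changed:",
      faulty_logits.argmax().item() != clean_logits.argmax().item())
```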
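And as referenced in the Bit-Flip Attack entry above, this is a toy sketch of simulation-guided bit selection: every bit of a small quantized weight matrix is flipped in simulation and the flips are ranked by the accuracy drop they cause. The int8 weights, the random evaluation batch, and the linear "model" are illustrative assumptions, not the paper's actual procedure.

```python
# Hedged sketch of simulation-guided bit selection for a Bit-Flip Attack:
# flip every weight bit in simulation and rank the flips by the accuracy
# drop they cause on an evaluation batch. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
W = rng.integers(-128, 128, size=(10, 16), dtype=np.int8)  # quantized weights
X = rng.normal(size=(32, 16)).astype(np.float32)           # evaluation inputs
y = rng.integers(0, 10, size=32)                           # reference labels

def accuracy(weights: np.ndarray) -> float:
    preds = (X @ weights.astype(np.float32).T).argmax(axis=1)
    return float((preds == y).mean())

baseline = accuracy(W)
ranking = []
for row, col in np.ndindex(W.shape):
    for bit in range(8):
        mask = np.int8(1 << bit) if bit < 7 else np.int8(-128)  # bit 7 = sign
        W_faulty = W.copy()
        W_faulty[row, col] ^= mask
        ranking.append((baseline - accuracy(W_faulty), row, col, bit))

# The flips causing the largest drop are the prime candidates for a
# physical (e.g. laser-induced) fault.
for drop, row, col, bit in sorted(ranking, reverse=True)[:5]:
    print(f"weight[{row},{col}] bit {bit}: accuracy drop {drop:+.3f}")
```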