An Overview of Laser Injection against Embedded Neural Network Models
- URL: http://arxiv.org/abs/2105.01403v1
- Date: Tue, 4 May 2021 10:32:30 GMT
- Title: An Overview of Laser Injection against Embedded Neural Network Models
- Authors: Mathieu Dumont, Pierre-Alain Moellic, Raphael Viera, Jean-Max Dutertre, Rémi Bernhard
- Abstract summary: Fault Injection Analysis (FIA) is known to be very powerful, with a large spectrum of attack vectors.
Here, we propose to discuss how laser injection with state-of-the-art equipment, combined with theoretical evidence from Adversarial Machine Learning, highlights worrying threats against the integrity of deep learning inference.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For many IoT domains, Machine Learning and more particularly Deep Learning
bring very efficient solutions to handle complex data and perform challenging, often
critical tasks. However, the deployment of models on a large variety of devices faces
several obstacles related to trust and security. The latter is particularly critical
given the demonstrations of severe flaws impacting the integrity, confidentiality and
accessibility of neural network models. However, the attack surface of such embedded
systems cannot be reduced to abstract flaws but must encompass the physical threats
related to the implementation of these models within hardware platforms (e.g., 32-bit
microcontrollers). Among physical attacks, Fault Injection Analysis (FIA) is known to be
very powerful, with a large spectrum of attack vectors. Most importantly, highly focused
FIA techniques such as laser beam injection enable a very accurate evaluation of the
vulnerabilities as well as the robustness of embedded systems. Here, we propose to
discuss how laser injection with state-of-the-art equipment, combined with theoretical
evidence from Adversarial Machine Learning, highlights worrying threats against the
integrity of deep learning inference. We argue that joint efforts from the theoretical
AI and Physical Security communities are urgently needed.
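To make the integrity threat above concrete, the following minimal Python sketch (an illustrative assumption, not code or results from the paper) simulates a single laser-induced bit flip in an int8-quantized weight of a toy fully-connected layer and checks whether the faulted inference still yields the same prediction.

```python
# Minimal sketch (assumption, not from the paper): simulate one laser-induced
# bit flip in an int8 weight and check whether the prediction is corrupted.
import numpy as np

rng = np.random.default_rng(0)

# Toy int8-quantized fully-connected layer: logits = W @ x (scales omitted).
W = rng.integers(-128, 128, size=(10, 64), dtype=np.int8)
x = rng.integers(-128, 128, size=64, dtype=np.int8)

def logits(weights: np.ndarray) -> np.ndarray:
    # Accumulate in int32, as a 32-bit microcontroller MAC loop would.
    return weights.astype(np.int32) @ x.astype(np.int32)

# Fault model: flip the most significant bit of one stored weight.
faulty_W = W.copy()
faulty_W.view(np.uint8)[3, 17] ^= np.uint8(1 << 7)

clean, faulted = logits(W), logits(faulty_W)
print("clean prediction:  ", clean.argmax())
print("faulted prediction:", faulted.argmax())
print("integrity violated:", clean.argmax() != faulted.argmax())
```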
Related papers
- Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z)
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, putting a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z)
- Fault Injection on Embedded Neural Networks: Impact of a Single Instruction Skip [1.3654846342364308]
We present the first set of experiments on the use of two fault injection means, electromagnetic and laser injection, applied to neural network models embedded on a Cortex-M4 32-bit microcontroller platform.
Our goal is to simulate and experimentally demonstrate the impact of a specific fault model, namely instruction skip.
We reveal integrity threats by targeting several steps in the inference program of typical convolutional neural network models (a minimal simulation sketch of this fault model appears after this list).
arXiv Detail & Related papers (2023-08-31T12:14:37Z)
- Evaluation of Parameter-based Attacks against Embedded Neural Networks with Laser Injection [1.2499537119440245]
This work reports, for the first time, a practical and successful variant of the Bit-Flip Attack (BFA) on a 32-bit Cortex-M microcontroller using laser fault injection.
To avoid unrealistic brute-force strategies, we show how simulations help select the most sensitive set of bits in the parameters, taking the laser fault model into account (a bit-sensitivity ranking sketch appears after this list).
arXiv Detail & Related papers (2023-04-25T14:48:58Z)
- Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks which are compact, easy to train, reliable and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
arXiv Detail & Related papers (2021-09-02T13:58:12Z)
- A Review of Confidentiality Threats Against Embedded Neural Network Models [0.0]
This review focuses on attacks targeting the confidentiality of embedded Deep Neural Network (DNN) models.
We highlight the fact that Side-Channel Analysis (SCA) is a relatively unexplored means by which a model's confidentiality can be compromised.
arXiv Detail & Related papers (2021-05-04T10:27:20Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to strengthen the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Artificial Neural Networks and Fault Injection Attacks [7.601937548486356]
This chapter is on the security assessment of artificial intelligence (AI) and neural network (NN) accelerators in the face of fault injection attacks.
It discusses the assets on these platforms and compares them with ones known and well-studied in the field of cryptographic systems.
arXiv Detail & Related papers (2020-08-17T03:29:57Z)
- Security and Machine Learning in the Real World [33.40597438876848]
We build on our experience evaluating the security of a machine learning software product deployed on a large scale to broaden the conversation to include a systems security view of vulnerabilities.
We propose a list of short-term mitigation suggestions that practitioners deploying machine learning modules can use to secure their systems.
arXiv Detail & Related papers (2020-07-13T16:57:12Z)
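As noted in the "Fault Injection on Embedded Neural Networks: Impact of a Single Instruction Skip" entry above, an instruction-skip fault can first be studied in simulation. The sketch below is a hypothetical illustration (not that paper's code): it skips one step of a toy inference routine, the bias addition of the first layer, and compares the prediction with a fault-free run.

```python
# Hypothetical simulation of an instruction-skip fault model (illustration only):
# the bias-addition step of one layer is skipped during a toy inference run.
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer network with random parameters (stand-in for an embedded CNN head).
W1, b1 = rng.normal(size=(32, 16)), rng.normal(size=32)
W2, b2 = rng.normal(size=(10, 32)), rng.normal(size=10)

def infer(x: np.ndarray, skip_bias_layer1: bool = False) -> np.ndarray:
    """Run the toy inference; optionally skip one instruction (the bias add)."""
    h = W1 @ x
    if not skip_bias_layer1:   # an instruction-skip fault removes this step
        h = h + b1
    h = np.maximum(h, 0.0)     # ReLU
    return W2 @ h + b2

x = rng.normal(size=16)
clean = infer(x)
faulted = infer(x, skip_bias_layer1=True)
print("clean prediction:  ", clean.argmax())
print("faulted prediction:", faulted.argmax())
print("integrity violated:", clean.argmax() != faulted.argmax())
```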
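Likewise, for the "Evaluation of Parameter-based Attacks against Embedded Neural Networks with Laser Injection" entry, the following hypothetical sketch (an assumption, not the authors' simulation) shows the basic idea of ranking bit positions by sensitivity before a Bit-Flip Attack: every bit of a toy int8 weight matrix is flipped in turn and scored by how strongly it perturbs the output logits; the highest-scoring bits would be the natural laser-injection targets.

```python
# Hypothetical bit-sensitivity ranking for a Bit-Flip-Attack-style simulation
# (illustration only): flip each bit of an int8 weight matrix and score the
# resulting perturbation of the output logits.
import numpy as np

rng = np.random.default_rng(2)
W = rng.integers(-128, 128, size=(10, 64), dtype=np.int8)   # toy quantized layer
x = rng.integers(-128, 128, size=64, dtype=np.int8)

def logits(weights: np.ndarray) -> np.ndarray:
    return weights.astype(np.int32) @ x.astype(np.int32)

ref = logits(W)
scores = []
for row in range(W.shape[0]):
    for col in range(W.shape[1]):
        for bit in range(8):
            faulty = W.copy()
            view = faulty.view(np.uint8)           # reinterpret the int8 bytes
            view[row, col] ^= np.uint8(1 << bit)   # single bit-flip fault
            # Sensitivity score: total change of the logits caused by this flip.
            scores.append((np.abs(logits(faulty) - ref).sum(), (row, col, bit)))

# The most sensitive bits are the natural candidates for laser injection.
for score, (row, col, bit) in sorted(scores, reverse=True)[:5]:
    print(f"weight[{row},{col}] bit {bit}: score {score}")
```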