Like an Open Book? Read Neural Network Architecture with Simple Power
Analysis on 32-bit Microcontrollers
- URL: http://arxiv.org/abs/2311.01344v2
- Date: Tue, 6 Feb 2024 13:10:23 GMT
- Title: Like an Open Book? Read Neural Network Architecture with Simple Power
Analysis on 32-bit Microcontrollers
- Authors: Raphael Joud, Pierre-Alain Moellic, Simon Pontie, Jean-Baptiste Rigaud
- Abstract summary: A neural network model's architecture is the most important information an adversary aims to recover.
For the first time, we propose an extraction methodology for traditional MLP and CNN models running on a high-end 32-bit microcontroller.
Despite a few challenging cases, we claim that, contrary to parameter extraction, the complexity of the attack is relatively low.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Model extraction is a growing concern for the security of AI systems. For
deep neural network models, the architecture is the most important information
an adversary aims to recover. Being a sequence of repeated computation blocks,
neural network models deployed on edge devices will generate distinctive
side-channel leakages. The latter can be exploited to extract critical
information when targeted platforms are physically accessible. By combining
theoretical knowledge about deep learning practices and analysis of a
widespread implementation library (ARM CMSIS-NN), our purpose is to answer this
critical question: how far can we extract architecture information by simply
examining an EM side-channel trace? For the first time, we propose an
extraction methodology for traditional MLP and CNN models running on a high-end
32-bit microcontroller (Cortex-M7) that relies only on simple pattern
recognition analysis. Despite a few challenging cases, we claim that, contrary
to parameter extraction, the complexity of the attack is relatively low and we
highlight the urgent need for practicable protections that could fit the strong
memory and latency requirements of such platforms.
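The core premise, that a model built from repeated computation blocks leaves a recognizable repeating pattern in its power or EM trace, can be illustrated with a toy sketch. Everything below is a hedged illustration, not the paper's actual pipeline: the synthetic trace, the fixed threshold, and the function names are all assumptions made for the example.

```python
import random

def synthetic_trace(layer_lengths, noise=0.05, seed=0):
    """Build a toy 'power trace' (illustrative, not real measurement data):
    each layer is a burst of high-amplitude samples separated by a short
    low-amplitude idle gap, mimicking repeated computation blocks."""
    rng = random.Random(seed)
    trace = []
    for n in layer_lengths:
        # active computation block: samples around 1.0
        trace += [1.0 + rng.uniform(-noise, noise) for _ in range(n)]
        # idle gap between blocks: samples around 0.1
        trace += [0.1 + rng.uniform(-noise, noise) for _ in range(30)]
    return trace

def count_blocks(trace, threshold=0.5):
    """Count contiguous above-threshold segments; each segment is read
    as one repeated computation block (i.e. one layer)."""
    blocks, inside = 0, False
    for sample in trace:
        if sample > threshold and not inside:
            blocks += 1
            inside = True
        elif sample <= threshold:
            inside = False
    return blocks

# A hypothetical 4-layer model: longer bursts for wider layers.
trace = synthetic_trace([200, 150, 150, 80])
print(count_blocks(trace))  # 4
```

Real traces are far noisier, and distinguishing layer *types* (dense vs. convolutional) requires matching patterns against the known implementation library (here, ARM CMSIS-NN kernels), but the segment-counting step above captures why the attack's complexity is low: the architecture is visible as macroscopic structure in the trace.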
Related papers
- Principled Architecture-aware Scaling of Hyperparameters [69.98414153320894]
Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process.
In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture.
We demonstrate that network rankings in benchmarks can be easily changed simply by training the networks better.
arXiv Detail & Related papers (2024-02-27T11:52:49Z)
- DeepTheft: Stealing DNN Model Architectures through Power Side Channel [42.380259435613354]
Deep Neural Network (DNN) models are often deployed in resource-sharing clouds as Machine Learning as a Service (MLaaS) to provide inference services.
To steal model architectures, which are valuable intellectual property, a class of attacks has been proposed that exploits different side-channel leakages.
We propose a new end-to-end attack, DeepTheft, to accurately recover complex DNN model architectures on general processors via the RAPL-based power side channel.
arXiv Detail & Related papers (2023-09-21T08:58:14Z)
- A Practical Introduction to Side-Channel Extraction of Deep Neural Network Parameters [0.0]
We focus this work on software implementations of deep neural networks embedded in a high-end 32-bit microcontroller (Cortex-M7).
To our knowledge, this work is the first to target such a high-end 32-bit platform. Importantly, we raise and discuss the remaining challenges for the complete extraction of a deep neural network model.
arXiv Detail & Related papers (2022-11-10T14:02:39Z)
- The Dark Side of AutoML: Towards Architectural Backdoor Search [49.16544351888333]
EVAS is a new attack that leverages NAS to find neural architectures with inherent backdoors and exploits such vulnerability using input-aware triggers.
EVAS features high evasiveness, transferability, and robustness, thereby expanding the adversary's design spectrum.
This work raises concerns about the current practice of NAS and points to potential directions to develop effective countermeasures.
arXiv Detail & Related papers (2022-10-21T18:13:23Z)
- Leaky Nets: Recovering Embedded Neural Network Models and Inputs through Simple Power and Timing Side-Channels -- Attacks and Defenses [4.014351341279427]
We study the side-channel vulnerabilities of embedded neural network implementations by recovering their parameters.
We demonstrate our attacks on popular micro-controller platforms over networks of different precisions.
Countermeasures against timing-based attacks are implemented and their overheads are analyzed.
arXiv Detail & Related papers (2021-03-26T21:28:13Z)
- Edge-Detect: Edge-centric Network Intrusion Detection using Deep Neural Network [0.0]
Edge nodes are crucial for detection against multitudes of cyber attacks on Internet-of-Things endpoints.
We develop a novel lightweight, fast, and accurate 'Edge-Detect' model, which detects Denial-of-Service attacks on edge nodes using DLM techniques.
arXiv Detail & Related papers (2021-02-03T04:24:34Z)
- Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
The robustness of our methods is validated on complex quadruped robot dynamics and can be generally applied to most robotic platforms.
arXiv Detail & Related papers (2020-07-28T07:34:30Z)
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of Boston house-price prediction and the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
- Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond [171.07853346630057]
Linear relaxation based perturbation analysis (LiRPA) for neural networks has become a core component in robustness verification and certified defense.
We develop an automatic framework to enable perturbation analysis on any neural network structures.
We demonstrate LiRPA based certified defense on Tiny ImageNet and Downscaled ImageNet.
arXiv Detail & Related papers (2020-02-28T18:47:43Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.