A Practical Introduction to Side-Channel Extraction of Deep Neural
Network Parameters
- URL: http://arxiv.org/abs/2211.05590v1
- Date: Thu, 10 Nov 2022 14:02:39 GMT
- Title: A Practical Introduction to Side-Channel Extraction of Deep Neural
Network Parameters
- Authors: Raphael Joud, Pierre-Alain Moellic, Simon Pontie, Jean-Baptiste Rigaud
- Abstract summary: We focus this work on software implementation of deep neural networks embedded in a high-end 32-bit microcontroller (Cortex-M7)
To our knowledge, this work is the first to target such a high-end 32-bit platform. Importantly, we raise and discuss the remaining challenges for the complete extraction of a deep neural network model.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Model extraction is a major threat for embedded deep neural network models
that leverages an extended attack surface. Indeed, by physically accessing a
device, an adversary may exploit side-channel leakages to extract critical
information of a model (i.e., its architecture or internal parameters).
Different adversarial objectives are possible, including a fidelity-based
scenario where the architecture and parameters are precisely extracted (model
cloning). We focus this work on the software implementation of deep neural networks
embedded in a high-end 32-bit microcontroller (Cortex-M7) and expose several
challenges related to fidelity-based parameters extraction through side-channel
analysis, from the basic multiplication operation to the feed-forward
connection through the layers. To precisely extract the value of parameters
represented in the single-precision floating point IEEE-754 standard, we
propose an iterative process that is evaluated with both simulations and traces
from a Cortex-M7 target. To our knowledge, this work is the first to target
such a high-end 32-bit platform. Importantly, we raise and discuss the
remaining challenges for the complete extraction of a deep neural network
model, more particularly the critical case of biases.
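Since the attack targets parameters stored in the IEEE-754 single-precision format, it helps to see how a 32-bit float decomposes into the fields an adversary must recover, alongside the Hamming-weight leakage model commonly assumed in side-channel analysis. A minimal illustrative sketch in Python (the helper names are our own, not from the paper):

```python
import struct

def f32_fields(x: float) -> tuple[int, int, int]:
    """Split a value into its IEEE-754 single-precision fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                 # 1 sign bit
    exponent = (bits >> 23) & 0xFF    # 8 exponent bits, biased by 127
    mantissa = bits & 0x7FFFFF        # 23 fraction bits
    return sign, exponent, mantissa

def hamming_weight(v: int) -> int:
    """Hamming weight: the leakage model typically assumed in SCA."""
    return bin(v).count("1")

# Example: 0.15625 = 1.25 * 2^-3  ->  sign 0, exponent 127 - 3 = 124,
# mantissa 0b010...0 (fraction .25) = 2097152.
print(f32_fields(0.15625))   # (0, 124, 2097152)
```

An extraction process of the kind the abstract describes must recover each of these fields from observable leakage, which is why the sign, exponent, and mantissa of a weight pose distinct recovery problems.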
Related papers
- SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning [49.83621156017321]
SimBa is an architecture designed to scale up parameters in deep RL by injecting a simplicity bias.
By scaling up parameters with SimBa, the sample efficiency of various deep RL algorithms, including off-policy, on-policy, and unsupervised methods, is consistently improved.
arXiv Detail & Related papers (2024-10-13T07:20:53Z)
- Computer Vision Model Compression Techniques for Embedded Systems: A Survey [75.38606213726906]
This paper covers the main model compression techniques applied for computer vision tasks.
We present the characteristics of compression subareas, compare different approaches, and discuss how to choose the best technique.
We also share codes to assist researchers and new practitioners in overcoming initial implementation challenges.
arXiv Detail & Related papers (2024-08-15T16:41:55Z)
- EvSegSNN: Neuromorphic Semantic Segmentation for Event Data [0.6138671548064356]
EvSegSNN is a biologically plausible encoder-decoder U-shaped architecture relying on Parametric Leaky Integrate and Fire neurons.
We introduce an end-to-end biologically inspired semantic segmentation approach by combining Spiking Neural Networks with event cameras.
Experiments conducted on DDD17 demonstrate that EvSegSNN outperforms the closest state-of-the-art model in terms of MIoU.
arXiv Detail & Related papers (2024-06-20T10:36:24Z)
- Like an Open Book? Read Neural Network Architecture with Simple Power Analysis on 32-bit Microcontrollers [0.0]
A neural network model's architecture is the most important information an adversary aims to recover.
For the first time, we propose an extraction methodology for traditional and CNN models running on a high-end 32-bit microcontroller.
Despite a few challenging cases, we claim that, contrary to parameter extraction, the complexity of the attack is relatively low.
arXiv Detail & Related papers (2023-11-02T15:55:20Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Model Inspired Autoencoder for Unsupervised Hyperspectral Image Super-Resolution [25.878793557013207]
This paper focuses on hyperspectral image (HSI) super-resolution that aims to fuse a low-spatial-resolution HSI and a high-spatial-resolution multispectral image.
Existing deep learning-based approaches are mostly supervised that rely on a large number of labeled training samples.
We make the first attempt to design a model inspired deep network for HSI super-resolution in an unsupervised manner.
arXiv Detail & Related papers (2021-10-22T05:15:16Z)
- Conditionally Parameterized, Discretization-Aware Neural Networks for Mesh-Based Modeling of Physical Systems [0.0]
We generalize the idea of conditional parametrization -- using trainable functions of input parameters.
We show that conditionally parameterized networks provide superior performance compared to their traditional counterparts.
A network architecture named CP-GNet is also proposed as the first deep learning model capable of standalone prediction of reacting flows on meshes.
arXiv Detail & Related papers (2021-09-15T20:21:13Z)
- Learning to Estimate RIS-Aided mmWave Channels [50.15279409856091]
We focus on uplink cascaded channel estimation, where known and fixed base station combining and RIS phase control matrices are considered for collecting observations.
To boost the estimation performance and reduce the training overhead, the inherent channel sparsity of mmWave channels is leveraged in the deep unfolding method.
It is verified that the proposed deep unfolding network architecture can outperform the least squares (LS) method with a relatively smaller training overhead and online computational complexity.
arXiv Detail & Related papers (2021-07-27T06:57:56Z)
- Dataless Model Selection with the Deep Frame Potential [45.16941644841897]
We quantify networks by their intrinsic capacity for unique and robust representations.
We propose the deep frame potential: a measure of coherence that is approximately related to representation stability but has minimizers that depend only on network structure.
We validate its use as a criterion for model selection and demonstrate correlation with generalization error on a variety of common residual and densely connected network architectures.
arXiv Detail & Related papers (2020-03-30T23:27:25Z)
- SideInfNet: A Deep Neural Network for Semi-Automatic Semantic Segmentation with Side Information [83.03179580646324]
This paper proposes a novel deep neural network architecture, namely SideInfNet.
It integrates features learnt from images with side information extracted from user annotations.
To evaluate our method, we applied the proposed network to three semantic segmentation tasks and conducted extensive experiments on benchmark datasets.
arXiv Detail & Related papers (2020-02-07T06:10:54Z)
- Depthwise Non-local Module for Fast Salient Object Detection Using a Single Thread [136.2224792151324]
We propose a new deep learning algorithm for fast salient object detection.
The proposed algorithm achieves competitive accuracy and high inference efficiency simultaneously with a single CPU thread.
arXiv Detail & Related papers (2020-01-22T15:23:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.