Leaky Nets: Recovering Embedded Neural Network Models and Inputs through
Simple Power and Timing Side-Channels -- Attacks and Defenses
- URL: http://arxiv.org/abs/2103.14739v1
- Date: Fri, 26 Mar 2021 21:28:13 GMT
- Title: Leaky Nets: Recovering Embedded Neural Network Models and Inputs through
Simple Power and Timing Side-Channels -- Attacks and Defenses
- Authors: Saurav Maji, Utsav Banerjee, and Anantha P. Chandrakasan
- Abstract summary: We study the side-channel vulnerabilities of embedded neural network implementations by recovering their parameters.
We demonstrate our attacks on popular micro-controller platforms over networks of different precisions.
Countermeasures against timing-based attacks are implemented and their overheads are analyzed.
- Score: 4.014351341279427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the recent advancements in machine learning theory, many commercial
embedded micro-processors use neural network models for a variety of signal
processing applications. However, their associated side-channel security
vulnerabilities pose a major concern. There have been several proof-of-concept
attacks demonstrating the extraction of their model parameters and input data.
However, many of these attacks involve specific assumptions, have limited
applicability, or impose large overheads on the attacker. In this work, we study
the side-channel vulnerabilities of embedded neural network implementations by
recovering their parameters using timing-based information leakage and simple
power analysis side-channel attacks. We demonstrate our attacks on popular
micro-controller platforms across networks of different precisions, such as
floating-point, fixed-point, and binary networks. We are able to successfully
recover not only the model parameters but also the inputs for the above
networks. Countermeasures against timing-based attacks are implemented and
their overheads are analyzed.
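The abstract does not include code, but the class of leakage it targets is easy to sketch. The C fragment below (written for a generic 32-bit microcontroller; all function and variable names are illustrative and not taken from the paper) contrasts a ReLU whose data-dependent branch leaks the sign of the pre-activation through timing and simple power traces with a constant-time, mask-based rewrite, and notes the kind of weight-dependent zero-skipping shortcut that timing attacks on embedded inference commonly exploit.

```c
#include <stdint.h>

/* Timing-leaky ReLU: the branch is taken only for negative inputs, so an
 * attacker timing the operation (or inspecting a simple power trace) learns
 * the sign of x, which depends on the secret weights and the input. */
static int32_t relu_leaky(int32_t x)
{
    if (x < 0) {              /* data-dependent branch -> timing leak */
        return 0;
    }
    return x;
}

/* Constant-time ReLU: derive an all-ones/all-zeros mask from the sign bit
 * and select with bitwise operations only. Assumes arithmetic right shift
 * for signed integers, as on typical Cortex-M toolchains. */
static int32_t relu_const_time(int32_t x)
{
    int32_t mask = ~(x >> 31); /* all ones if x >= 0, zero if x < 0 */
    return x & mask;
}

/* Fixed-point neuron. The commented-out zero-skipping shortcut would make
 * latency depend on the weight values -- exactly the kind of optimization
 * that leaks model parameters through timing. */
static int32_t neuron_fixed_point(const int8_t *w, const int8_t *x, int n)
{
    int32_t acc = 0;
    for (int i = 0; i < n; i++) {
        /* if (w[i] == 0) continue;   weight-dependent timing: avoid */
        acc += (int32_t)w[i] * (int32_t)x[i];
    }
    return relu_const_time(acc);
}
```

The mask-based version executes the same instruction sequence for every input, which is the spirit of the constant-time countermeasures whose overheads the paper analyzes; the paper's actual implementations and overhead figures should be taken from the paper itself.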
Related papers
- When Side-Channel Attacks Break the Black-Box Property of Embedded
Artificial Intelligence [0.8192907805418583]
Deep neural networks (DNNs) are subject to malicious examples designed to fool the network while remaining undetectable to a human observer.
We propose an architecture-agnostic attack that overcomes this constraint by extracting the logits.
Our method combines hardware and software attacks, by performing a side-channel attack that exploits electromagnetic leakages.
arXiv Detail & Related papers (2023-11-23T13:41:22Z) - Fault Injection on Embedded Neural Networks: Impact of a Single
Instruction Skip [1.3654846342364308]
We present the first set of experiments on the use of two fault injection means, electromagnetic and laser injection, applied to neural network models embedded on a Cortex-M4 32-bit microcontroller platform.
Our goal is to simulate and experimentally demonstrate the impact of a specific fault model, namely instruction skip.
We reveal integrity threats by targeting several steps in the inference program of typical convolutional neural network models.
arXiv Detail & Related papers (2023-08-31T12:14:37Z) - Adversarial Attacks on Leakage Detectors in Water Distribution Networks [6.125017875330933]
We propose a taxonomy for adversarial attacks against machine learning based leakage detectors in water distribution networks.
Based on a mathematical formalization of the least sensitive point problem, we use three different algorithmic approaches to find a solution.
arXiv Detail & Related papers (2023-05-25T12:05:18Z) - NetSentry: A Deep Learning Approach to Detecting Incipient Large-scale
Network Attacks [9.194664029847019]
We show how to use Machine Learning for Network Intrusion Detection (NID) in a principled way.
We propose NetSentry, perhaps the first NIDS of its kind, built on Bi-ALSTM, an original ensemble of sequential neural models.
We demonstrate F1 score gains above 33% over the state-of-the-art, as well as up to 3 times higher rates of detecting attacks such as XSS and web brute-force.
arXiv Detail & Related papers (2022-02-20T17:41:02Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the robustness of the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Adversarial Attacks on Deep Learning Based Power Allocation in a Massive
MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation of the neural network (NN) input, the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z) - Defence against adversarial attacks using classical and quantum-enhanced
Boltzmann machines [64.62510681492994]
Generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations.
We find improvements ranging from 5% to 72% against attacks with Boltzmann machines on the MNIST dataset.
arXiv Detail & Related papers (2020-12-21T19:00:03Z) - Enhancing Robustness Against Adversarial Examples in Network Intrusion
Detection Systems [1.7386735294534732]
RePO is a new mechanism to build an NIDS with the help of denoising autoencoders capable of detecting different types of network attacks in a low false alert setting.
Our evaluation shows denoising autoencoders can improve detection of malicious traffic by up to 29% in a normal setting and by up to 45% in an adversarial setting.
arXiv Detail & Related papers (2020-08-09T07:04:06Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Firearm Detection and Segmentation Using an Ensemble of Semantic Neural
Networks [62.997667081978825]
We present a weapon detection system based on an ensemble of semantic Convolutional Neural Networks.
A set of simpler neural networks dedicated to specific tasks requires fewer computational resources and can be trained in parallel.
The overall output of the system given by the aggregation of the outputs of individual networks can be tuned by a user to trade-off false positives and false negatives.
arXiv Detail & Related papers (2020-02-11T13:58:16Z)