A Review of Confidentiality Threats Against Embedded Neural Network Models
- URL: http://arxiv.org/abs/2105.01401v1
- Date: Tue, 4 May 2021 10:27:20 GMT
- Title: A Review of Confidentiality Threats Against Embedded Neural Network Models
- Authors: Raphaël Joud, Pierre-Alain Moellic, Rémi Bernhard, Jean-Baptiste Rigaud
- Abstract summary: This review focuses on attacks targeting the confidentiality of embedded Deep Neural Network (DNN) models.
We highlight the fact that Side-Channel Analysis (SCA) is a relatively unexplored avenue through which a model's confidentiality can be compromised.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of Machine Learning (ML) algorithms, and especially Deep Neural
Network (DNN) models, has become a widely accepted standard in many domains,
particularly IoT-based systems. DNN models achieve impressive performance in
several sensitive fields such as medical diagnosis, smart transport or security
threat detection, and represent a valuable piece of Intellectual Property. Over
the last few years, a major trend has been the large-scale deployment of models
across a wide variety of devices. However, this migration to embedded systems is
slowed by the broad spectrum of attacks threatening the integrity,
confidentiality and availability of embedded models. In this review, we cover
the landscape of attacks targeting the confidentiality of embedded DNN models
that may have a major impact on critical IoT systems, with a particular focus
on model extraction and data leakage. We highlight the fact that Side-Channel
Analysis (SCA) is a relatively unexplored avenue through which a model's
confidentiality can be compromised. Input data, architecture or parameters of a
model can be extracted from power or electromagnetic observations, underscoring
a real need for protection from a security point of view.
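To make the side-channel threat concrete, the sketch below shows the kind of correlation-based analysis the review points to. It is a minimal illustration under assumed conditions (a simulated Hamming-weight leakage model, a hypothetical 8-bit quantized weight, known layer inputs), not the method of the reviewed paper or of any specific attack.

```python
# Minimal illustrative sketch (hypothetical setup): correlation power analysis (CPA)
# recovering one 8-bit quantized weight under a simulated Hamming-weight leakage model.
# The secret weight, leakage model and noise level are all assumptions for this example.
import numpy as np

rng = np.random.default_rng(0)

def hamming_weight(x):
    """Hamming weight of each element of an unsigned 8-bit array."""
    return np.unpackbits(x.astype(np.uint8)[..., None], axis=-1).sum(axis=-1)

# --- Simulated device: leaks HW(input * secret_weight mod 256) plus noise ---
secret_weight = 173                            # 8-bit weight the attacker targets
inputs = rng.integers(0, 256, size=3000)       # known inputs fed to the layer
intermediate = (inputs * secret_weight) % 256  # hypothetical leaking intermediate value
traces = hamming_weight(intermediate) + rng.normal(0.0, 0.5, size=inputs.size)

# --- Attacker: correlate the hypothetical leakage of every weight guess with the traces ---
best_guess, best_corr = None, -1.0
for guess in range(1, 256):                    # guess 0 gives a constant hypothesis, skip it
    hypothesis = hamming_weight((inputs * guess) % 256)
    corr = abs(np.corrcoef(hypothesis, traces)[0, 1])
    if corr > best_corr:
        best_guess, best_corr = guess, corr

print(f"recovered weight = {best_guess} (true = {secret_weight}), |corr| = {best_corr:.2f}")
```

In practice the traces would come from power or electromagnetic probes rather than a simulator, and the same correlation step would typically be repeated parameter by parameter.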
Related papers
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z) - Malware Classification using Deep Neural Networks: Performance
Evaluation and Applications in Edge Devices [0.0]
Multiple Deep Neural Networks (DNNs) can be designed to detect and classify malware binaries.
The feasibility of deploying these DNN models on edge devices to enable real-time classification, particularly in resource-constrained scenarios, proves to be integral to large IoT systems.
This study contributes to advancing malware detection techniques and emphasizes the significance of integrating cybersecurity measures for the early detection of malware.
arXiv Detail & Related papers (2023-08-21T16:34:46Z) - Leveraging a Probabilistic PCA Model to Understand the Multivariate
Statistical Network Monitoring Framework for Network Security Anomaly
Detection [64.1680666036655]
We revisit anomaly detection techniques based on PCA from a probabilistic generative model point of view.
We have evaluated the mathematical model using two different datasets.
arXiv Detail & Related papers (2023-02-02T13:41:18Z) - RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency
IoT systems [41.1371349978643]
We present an approach that targets the security of collaborative deep inference via re-thinking the distribution strategy.
We formulate this methodology as an optimization problem in which we establish a trade-off between the latency of co-inference and the privacy level of the data.
arXiv Detail & Related papers (2022-08-27T14:50:00Z) - An Overview of Laser Injection against Embedded Neural Network Models [0.0]
Fault Injection Analysis (FIA) is known to be very powerful, with a large spectrum of attack vectors.
Here, we propose to discuss how laser injection with state-of-the-art equipment, combined with theoretical evidence from Adversarial Machine Learning, highlights worrying threats against the integrity of deep learning inference.
arXiv Detail & Related papers (2021-05-04T10:32:30Z) - On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z) - Measurement-driven Security Analysis of Imperceptible Impersonation
Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z) - DeepHammer: Depleting the Intelligence of Deep Neural Networks through
Targeted Chain of Bit Flips [29.34622626909906]
We demonstrate the first hardware-based attack on quantized deep neural networks (DNNs).
DeepHammer is able to successfully tamper with DNN inference behavior at run-time within a few minutes.
Our work highlights the need to incorporate security mechanisms in future deep learning systems.
arXiv Detail & Related papers (2020-03-30T18:51:59Z) - DeepMAL -- Deep Learning Models for Malware Traffic Detection and
Classification [4.187494796512101]
We introduce DeepMAL, a DL model which is able to capture the underlying statistics of malicious traffic.
We show that DeepMAL can detect and classify malware flows with high accuracy, outperforming traditional, shallow models.
arXiv Detail & Related papers (2020-03-03T16:54:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.