RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems
- URL: http://arxiv.org/abs/2208.13032v1
- Date: Sat, 27 Aug 2022 14:50:00 GMT
- Title: RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems
- Authors: Emna Baccour, Aiman Erbad, Amr Mohamed, Mounir Hamdi, Mohsen Guizani
- Abstract summary: We present an approach that targets the security of collaborative deep inference via re-thinking the distribution strategy.
We formulate this methodology as an optimization in which we establish a trade-off between the latency of co-inference and the privacy level of the data.
- Score: 41.1371349978643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although Deep Neural Networks (DNN) have become the backbone technology of
several ubiquitous applications, their deployment in resource-constrained
machines, e.g., Internet of Things (IoT) devices, is still challenging. To
satisfy the resource requirements of such a paradigm, collaborative deep
inference with IoT synergy was introduced. However, the distribution of DNN
networks suffers from severe data leakage. Various threats have been presented,
including black-box attacks, where malicious participants can recover arbitrary
inputs fed into their devices. Although many countermeasures were designed to
achieve privacy-preserving DNN, most of them result in additional computation
and lower accuracy. In this paper, we present an approach that targets the
security of collaborative deep inference via re-thinking the distribution
strategy, without sacrificing the model performance. Particularly, we examine
different DNN partitions that make the model susceptible to black-box threats
and we derive the amount of data that should be allocated per device to hide
properties of the original input. We formulate this methodology as an
optimization in which we establish a trade-off between the latency of
co-inference and the privacy level of the data. Next, to relax the optimal
solution, we shape our approach as a Reinforcement Learning (RL) design that
supports heterogeneous devices as well as multiple DNNs/datasets.
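
As a rough illustration of the trade-off the abstract describes, the minimal sketch below treats the choice of a DNN split point between a constrained device and an edge server as a bandit-style RL problem, rewarding low latency and low input-reconstruction risk. All quantities here (layer costs, device speeds, the risk model, the weight LAMBDA) are invented assumptions for illustration, not the paper's actual formulation.

```python
# Toy sketch: choosing a DNN split point that trades off co-inference
# latency against input-reconstruction (black-box) risk.
# All numbers and the reward shape are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_LAYERS = 8                                   # layers of a sequential DNN
layer_cost = rng.uniform(1.0, 3.0, N_LAYERS)   # ms per layer (assumed)
edge_speed, device_speed = 4.0, 1.0            # relative compute speeds
link_delay = 2.0                               # ms to ship the split tensor

def latency(split):
    """Device runs layers [0, split); the edge server runs the rest."""
    return (layer_cost[:split].sum() / device_speed
            + layer_cost[split:].sum() / edge_speed
            + (link_delay if 0 < split < N_LAYERS else 0.0))

def privacy_risk(split):
    """Assumption: shallow activations are easier to invert, so risk
    decays with the number of layers kept on the trusted device."""
    return np.exp(-0.7 * split)

LAMBDA = 5.0                                   # latency/privacy trade-off weight

def reward(split):
    return -(latency(split) + LAMBDA * privacy_risk(split))

# Epsilon-greedy bandit over split points (a stand-in for the RL design).
q = np.zeros(N_LAYERS + 1)
counts = np.zeros(N_LAYERS + 1)
for t in range(2000):
    a = rng.integers(0, N_LAYERS + 1) if rng.random() < 0.1 else int(np.argmax(q))
    r = reward(a) + rng.normal(0, 0.1)         # noisy observation
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]

best = int(np.argmax(q))
print(f"learned split after layer {best}: "
      f"latency={latency(best):.2f} ms, risk={privacy_risk(best):.3f}")
```

The paper itself distributes data and feature maps across many heterogeneous devices and supports multiple DNNs and datasets; the single-step, two-device bandit above is only meant to make the latency-versus-privacy reward structure concrete.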
Related papers
- Enhanced Convolution Neural Network with Optimized Pooling and Hyperparameter Tuning for Network Intrusion Detection [0.0]
We propose an Enhanced Convolutional Neural Network (EnCNN) for Network Intrusion Detection Systems (NIDS).
We compare EnCNN with various machine learning algorithms, including Logistic Regression, Decision Trees, Support Vector Machines (SVM), and ensemble methods like Random Forest, AdaBoost, and Voting Ensemble.
The results show that EnCNN significantly improves detection accuracy, with a notable 10% increase over state-of-the-art approaches.
arXiv Detail & Related papers (2024-09-27T11:20:20Z)
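
The entry above mainly reports a comparison against classical baselines; the EnCNN architecture itself is not specified here. The sketch below only shows how such a baseline comparison might be set up with scikit-learn, with a synthetic dataset standing in for a real NIDS dataset.

```python
# Sketch of the kind of baseline comparison described above, on synthetic data.
# The EnCNN model itself is not reproduced here.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=30, n_informative=15,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

baselines = {
    "LogReg": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(),
    "SVM": SVC(),
    "RandomForest": RandomForestClassifier(n_estimators=100),
    "AdaBoost": AdaBoostClassifier(),
    "Voting": VotingClassifier([("lr", LogisticRegression(max_iter=1000)),
                                ("rf", RandomForestClassifier(n_estimators=100)),
                                ("dt", DecisionTreeClassifier())]),
}

for name, clf in baselines.items():
    clf.fit(X_tr, y_tr)
    print(f"{name:12s} accuracy = {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```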
- Model Agnostic Hybrid Sharding For Heterogeneous Distributed Inference [11.39873199479642]
Nesa introduces a model-agnostic sharding framework designed for decentralized AI inference.
Our framework uses blockchain-based deep neural network sharding to distribute computational tasks across a diverse network of nodes.
Our results highlight the potential to democratize access to cutting-edge AI technologies.
arXiv Detail & Related papers (2024-07-29T08:18:48Z)
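
The sharding idea in the entry above can be illustrated by cutting a sequential network into contiguous shards, each notionally hosted by a different node. The blockchain coordination layer is not modeled, and the cut points and layer sizes below are arbitrary assumptions.

```python
# Minimal sketch of layer-wise model sharding across "nodes".
import torch
import torch.nn as nn

full_model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

def make_shards(model: nn.Sequential, cut_points):
    """Split a Sequential model at the given layer indices."""
    layers = list(model)
    bounds = [0, *cut_points, len(layers)]
    return [nn.Sequential(*layers[a:b]) for a, b in zip(bounds, bounds[1:])]

shards = make_shards(full_model, cut_points=[2, 4])   # three "nodes"

x = torch.randn(1, 32)
activation = x
for node_id, shard in enumerate(shards):
    # In a real deployment this hop would be a network transfer between nodes.
    activation = shard(activation)
    print(f"node {node_id}: output shape {tuple(activation.shape)}")

# Sanity check: the sharded pipeline matches the monolithic model.
assert torch.allclose(activation, full_model(x))
```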
- DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z)
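
The "Lyapunov-guided" aspect of the entry above refers to the standard drift-plus-penalty idea: per time slot, choose the action minimizing a weighted cost term plus a queue-stability term. The toy loop below illustrates only that idea with invented arrival rates and costs; it is not the MAD2RL algorithm, and the diffusion-based policy is not modeled.

```python
# Toy Lyapunov drift-plus-penalty loop: per slot, pick the action minimizing
# V*cost - Q*service so that average cost stays low while the queue stays stable.
import random

random.seed(0)
V = 10.0          # weight on the cost (penalty) term
Q = 0.0           # task queue backlog (in tasks)

# Two illustrative choices per slot: process locally (slow, cheap)
# or offload to the edge (fast, higher energy/latency cost).
ACTIONS = {"local": {"service": 1.0, "cost": 0.2},
           "offload": {"service": 3.0, "cost": 1.0}}

total_cost = 0.0
for t in range(10_000):
    arrivals = random.choice([0, 1, 2, 3])       # tasks arriving this slot
    # Greater service reduces the Lyapunov drift when the backlog Q is large.
    act = min(ACTIONS, key=lambda a: V * ACTIONS[a]["cost"] - Q * ACTIONS[a]["service"])
    total_cost += ACTIONS[act]["cost"]
    Q = max(Q - ACTIONS[act]["service"], 0.0) + arrivals   # queue update

print(f"avg cost per slot: {total_cost / 10_000:.3f}, final backlog: {Q:.1f}")
```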
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model of Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z)
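
Ensemble knowledge-distillation aggregation, as mentioned in the entry above, is commonly realized by distilling the averaged client predictions on an unlabeled transfer set into the global model instead of averaging weights. The sketch below shows that generic pattern; the models, data, and hyperparameters are placeholders, and the exact FLEKD procedure may differ.

```python
# Sketch of server-side ensemble distillation for FL aggregation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model():
    return nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 5))

torch.manual_seed(0)
clients = [make_model() for _ in range(4)]       # stand-ins for locally trained models
global_model = make_model()
transfer_x = torch.randn(256, 20)                # unlabeled public/transfer data

opt = torch.optim.Adam(global_model.parameters(), lr=1e-3)
for step in range(200):
    with torch.no_grad():
        # Ensemble teacher: average of client logits on the transfer batch.
        teacher_logits = torch.stack([c(transfer_x) for c in clients]).mean(dim=0)
    student_logits = global_model(transfer_x)
    # KL distillation loss between teacher and student output distributions.
    loss = F.kl_div(F.log_softmax(student_logits, dim=1),
                    F.softmax(teacher_logits, dim=1),
                    reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final distillation loss: {loss.item():.4f}")
```

Distilling predictions rather than fusing parameters is what makes this style of aggregation tolerant of heterogeneous client architectures, which is presumably the flexibility the summary refers to.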
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both inference accuracy and mean squared error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
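
The shared-backbone, multi-head structure described in the entry above can be sketched in a few lines. The MEMTL training and head-ensembling logic is not reproduced, and all dimensions are placeholders.

```python
# Minimal sketch of a shared backbone feeding multiple prediction heads (PHs).
import torch
import torch.nn as nn

class MultiHeadModel(nn.Module):
    def __init__(self, in_dim=16, hidden=64, n_heads=3, out_dim=4):
        super().__init__()
        # Shared feature extractor reused by every head.
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        # Independent prediction heads, e.g. one per offloading decision variable.
        self.heads = nn.ModuleList(nn.Linear(hidden, out_dim) for _ in range(n_heads))

    def forward(self, x):
        z = self.backbone(x)
        return [head(z) for head in self.heads]

model = MultiHeadModel()
outputs = model(torch.randn(8, 16))
for i, out in enumerate(outputs):
    print(f"head {i}: output shape {tuple(out.shape)}")
```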
- Computational Intelligence and Deep Learning for Next-Generation Edge-Enabled Industrial IoT [51.68933585002123]
We investigate how to deploy computational intelligence and deep learning (DL) in edge-enabled industrial IoT networks.
In this paper, we propose a novel multi-exit-based federated edge learning (ME-FEEL) framework.
In particular, the proposed ME-FEEL can achieve an accuracy gain of up to 32.7% in industrial IoT networks with severely limited resources.
arXiv Detail & Related papers (2021-10-28T08:14:57Z)
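
A multi-exit network, as in the ME-FEEL entry above, attaches auxiliary classifiers at intermediate depths so a constrained device can stop as soon as an exit is confident enough. The sketch below shows only that mechanism; the federated training side of ME-FEEL is omitted, and the confidence threshold and layer sizes are assumptions.

```python
# Sketch of a multi-exit classifier with confidence-based early exiting.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.exit1 = nn.Linear(hidden, n_classes)   # early (shallow) exit
        self.exit2 = nn.Linear(hidden, n_classes)   # final exit

    def forward(self, x, threshold=0.8):
        h1 = self.block1(x)
        p1 = F.softmax(self.exit1(h1), dim=1)
        if p1.max().item() >= threshold:            # confident enough: stop early
            return p1, "exit1"
        h2 = self.block2(h1)
        return F.softmax(self.exit2(h2), dim=1), "exit2"

torch.manual_seed(0)
net = MultiExitNet()
probs, taken = net(torch.randn(1, 32))
print(f"prediction {int(probs.argmax())} from {taken}")
```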
- Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC)
We first construct multiple anomaly detection DNN models of increasing complexity and associate each of them with a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
arXiv Detail & Related papers (2021-08-09T08:45:47Z)
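
The contextual-bandit selection in the entry above can be illustrated with a toy epsilon-greedy learner that, given a simple context, picks one of several detectors of increasing cost. The accuracies, costs, and reward shape below are invented, and the paper's policy-network solution is not reproduced.

```python
# Toy contextual bandit: pick a detector of appropriate complexity per context.
import numpy as np

rng = np.random.default_rng(0)

# accuracy[context, model]; columns = simple / medium / complex detector
accuracy = np.array([
    [0.95, 0.96, 0.97],   # "easy" inputs
    [0.70, 0.90, 0.95],   # "medium" inputs
    [0.40, 0.60, 0.92],   # "hard" inputs
])
cost = np.array([0.05, 0.15, 0.40])            # per-model execution cost (assumed)

q = np.zeros((3, 3))                           # value estimate per (context, model)
counts = np.zeros_like(q)

for t in range(20_000):
    ctx = rng.integers(0, 3)                   # observed context: input difficulty
    a = rng.integers(0, 3) if rng.random() < 0.1 else int(np.argmax(q[ctx]))
    correct = rng.random() < accuracy[ctx, a]  # simulate the chosen detector
    r = float(correct) - cost[a]               # reward: quality minus cost
    counts[ctx, a] += 1
    q[ctx, a] += (r - q[ctx, a]) / counts[ctx, a]

for ctx, name in enumerate(["easy", "medium", "hard"]):
    print(f"{name:6s} inputs -> detector {int(np.argmax(q[ctx]))}")
```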
- A Review of Confidentiality Threats Against Embedded Neural Network Models [0.0]
This review focuses on attacks targeting the confidentiality of embedded Deep Neural Network (DNN) models.
We highlight the fact that Side-Channel Analysis (SCA) is a relatively unexplored avenue by which a model's confidentiality can be compromised.
arXiv Detail & Related papers (2021-05-04T10:27:20Z)
- DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips [29.34622626909906]
We demonstrate the first hardware-based attack on quantized deep neural networks (DNNs).
DeepHammer is able to successfully tamper with DNN inference behavior at run time within a few minutes.
Our work highlights the need to incorporate security mechanisms in future deep learning systems.
arXiv Detail & Related papers (2020-03-30T18:51:59Z)
- Industrial Scale Privacy Preserving Deep Neural Network [23.690146141150407]
We propose an industrial-scale privacy-preserving neural network learning paradigm, which is secure against semi-honest adversaries.
We conduct experiments on a real-world fraud detection dataset and a financial distress prediction dataset.
arXiv Detail & Related papers (2020-03-11T10:15:37Z)
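
The entry above does not describe the underlying protocol, but systems secure against semi-honest adversaries typically build on primitives such as additive secret sharing. The sketch below shows only that basic idea on a single linear layer; a real protocol would work over a finite field with fixed-point encoding rather than the plain floats used here for brevity.

```python
# Minimal sketch of additive secret sharing applied to a linear layer.
import numpy as np

rng = np.random.default_rng(0)

def share(x):
    """Split x into two additive shares; neither share alone reveals x."""
    r = rng.normal(size=x.shape)
    return x - r, r

W = rng.normal(size=(4, 8))        # model weights (assumed known to both parties here)
x = rng.normal(size=8)             # private input feature vector

x0, x1 = share(x)
# Each party applies the linear layer to its own share only.
y0, y1 = W @ x0, W @ x1
# Recombining the output shares yields the true linear-layer output,
# because matrix multiplication is linear in the input.
assert np.allclose(y0 + y1, W @ x)
print("reconstructed linear-layer output matches:", np.round(y0 + y1, 3))
```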
This list is automatically generated from the titles and abstracts of the papers on this site.