A Privacy-Preserving Distributed Architecture for
Deep-Learning-as-a-Service
- URL: http://arxiv.org/abs/2003.13541v1
- Date: Mon, 30 Mar 2020 15:12:03 GMT
- Title: A Privacy-Preserving Distributed Architecture for
Deep-Learning-as-a-Service
- Authors: Simone Disabato, Alessandro Falcetta, Alessio Mongelluzzo, Manuel
Roveri
- Abstract summary: This paper introduces a novel distributed architecture for deep-learning-as-a-service.
It preserves users' sensitive data while providing Cloud-based machine and deep learning services.
- Score: 68.84245063902908
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep-learning-as-a-service is a novel and promising computing paradigm aiming
at providing machine/deep learning solutions and mechanisms through Cloud-based
computing infrastructures. Thanks to its ability to remotely execute and train
deep learning models (that typically require high computational loads and
memory occupation), such an approach guarantees high performance, scalability,
and availability. Unfortunately, such an approach requires to send information
to be processed (e.g., signals, images, positions, sounds, videos) to the
Cloud, hence having potentially catastrophic-impacts on the privacy of users.
This paper introduces a novel distributed architecture for
deep-learning-as-a-service that preserves users' sensitive data while providing
Cloud-based machine and deep learning services. The proposed architecture
relies on Homomorphic Encryption, which makes it possible to perform operations
directly on encrypted data. It has been tailored to Convolutional Neural
Networks (CNNs) in the domain of image analysis and implemented through a
client-server REST-based approach. Experimental results show the effectiveness
of the proposed architecture.
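The paper's actual encryption scheme is not reproduced here, but the homomorphic property the architecture builds on can be illustrated with a toy additively homomorphic cryptosystem: a textbook Paillier construction with demo-sized primes, not the scheme a real CNN deployment would use. Multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can compute on data it cannot read.

```python
# Toy Paillier cryptosystem: multiplying ciphertexts modulo n^2
# decrypts to the SUM of the plaintexts (additive homomorphism).
import random
from math import gcd

def paillier_keygen(p=293, q=433):
    # Demo-sized primes only; real deployments use >= 2048-bit moduli.
    n = p * q
    lam = (p - 1) * (q - 1)
    g = n + 1                        # standard choice of generator
    mu = pow(lam, -1, n)             # modular inverse of lam mod n (Python 3.8+)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    while True:                      # random blinding factor coprime to n
        r = random.randrange(1, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    l = (pow(c, lam, n2) - 1) // n   # L(x) = (x - 1) / n
    return (l * mu) % n

pub, priv = paillier_keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
c_sum = (c1 * c2) % (pub[0] ** 2)    # ciphertext product -> plaintext sum
assert decrypt(pub, priv, c_sum) == 42
```

The randomized blinding factor `r` makes encryption probabilistic, so equal plaintexts yield different ciphertexts; only the homomorphic addition property is needed to follow the idea of the architecture.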
Related papers
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized visual prompts are mounted to upgrade neural network sparsification in the proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z)
- Integrating Homomorphic Encryption and Trusted Execution Technology for Autonomous and Confidential Model Refining in Cloud [4.21388107490327]
Homomorphic encryption and trusted execution environment technology can protect confidentiality for autonomous computation.
We propose to integrate these two techniques in the design of the model refining scheme.
arXiv Detail & Related papers (2023-08-02T06:31:41Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
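The client/server split described above can be sketched as follows; the two-layer network, its shapes, and its random weights are illustrative stand-ins, not taken from any of the cited papers.

```python
# Sketch of Split Learning inference: the client runs the first layers and
# sends only the cut-layer activations ("smashed data") to the server,
# which completes the forward pass; the raw input never leaves the client.
import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)

# Client-side layer (holds the raw input x).
W1 = rng.normal(size=(8, 4))
def client_forward(x):
    return relu(W1 @ x)          # smashed data sent to the server

# Server-side layer (sees only activations, never x).
W2 = rng.normal(size=(3, 8))
def server_forward(smashed):
    logits = W2 @ smashed
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()           # class probabilities returned to client

x = rng.normal(size=4)
probs = server_forward(client_forward(x))
assert probs.shape == (3,) and np.isclose(probs.sum(), 1.0)
```

During training the same cut point is crossed twice per step: activations flow up, and the gradient with respect to the smashed data flows back down to the client.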
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Privacy-Preserving Graph Neural Network Training and Inference as a Cloud Service [15.939214141337803]
SecGNN is built from a synergy of insights on lightweight cryptography and machine learning techniques.
We show that SecGNN achieves comparable training and inference accuracy, with practically affordable performance.
arXiv Detail & Related papers (2022-02-16T02:57:10Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Existing approaches, however, do not supply the procedures and pipelines needed for the actual deployment of machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor frameworks and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- Privacy-Preserving Serverless Edge Learning with Decentralized Small Data [13.254530176359182]
Distributed training strategies have recently become a promising approach to ensure data privacy when training deep models.
This paper extends conventional serverless platforms with serverless edge learning architectures and provides an efficient distributed training framework from the networking perspective.
arXiv Detail & Related papers (2021-11-29T21:04:49Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
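The FedAvg rule this entry refers to, a size-weighted average of locally trained client models, can be sketched with NumPy; the linear least-squares clients below are a hypothetical stand-in for real client models.

```python
# Minimal FedAvg round: each client takes a few local gradient steps
# (here on 0.5*||Xw - y||^2 / n), then the server aggregates the client
# models weighted by local dataset size.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=10):
    """A few local gradient-descent steps on a least-squares objective."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """clients: list of (X, y). Returns the size-weighted average model."""
    total = sum(len(y) for _, y in clients)
    w_new = np.zeros_like(w_global)
    for X, y in clients:
        w_local = local_update(w_global, X, y)
        w_new += (len(y) / total) * w_local
    return w_new

# Synthetic non-iid clients (different input means) sharing w* = [2, -1].
w_true = np.array([2.0, -1.0])
clients = []
for shift in (0.0, 1.0, -1.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
print(np.round(w, 2))  # approaches w_true
```

Only model parameters cross the network; the client datasets `(X, y)` stay local, which is the privacy property federated learning trades on.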
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- Complexity-aware Adaptive Training and Inference for Edge-Cloud Distributed AI Systems [9.273593723275544]
IoT and machine learning applications create large amounts of data that require real-time processing.
We propose a distributed AI system to exploit both the edge and the cloud for training and inference.
arXiv Detail & Related papers (2021-09-14T05:03:54Z)
- FDNAS: Improving Data Privacy and Model Diversity in AutoML [7.402044070683503]
We propose a Federated Direct Neural Architecture Search (FDNAS) framework that allows hardware-aware NAS from decentralized non-iid data of clients.
To further adapt for various data distributions of clients, inspired by meta-learning, a cluster Federated Direct Neural Architecture Search (CFDNAS) framework is proposed to achieve client-aware NAS.
arXiv Detail & Related papers (2020-11-06T14:13:42Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.