Integrating Homomorphic Encryption and Trusted Execution Technology for
Autonomous and Confidential Model Refining in Cloud
- URL: http://arxiv.org/abs/2308.00963v1
- Date: Wed, 2 Aug 2023 06:31:41 GMT
- Title: Integrating Homomorphic Encryption and Trusted Execution Technology for
Autonomous and Confidential Model Refining in Cloud
- Authors: Pinglan Liu and Wensheng Zhang
- Abstract summary: Homomorphic encryption and trusted execution environment technology can protect confidentiality for autonomous computation.
We propose to integrate these two techniques in the design of the model refining scheme.
- Score: 4.21388107490327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing popularity of cloud computing and machine learning, it has
become common to outsource machine learning processes (including model training
and model-based inference) to the cloud. Beyond access to the extensive and
scalable resources offered by the cloud service provider, outsourcing is also
attractive to users if the cloud servers can manage the machine learning
processes autonomously on their behalf. Such a feature is especially valuable
when the machine learning is expected to be a long-term, continuous process and
the users are not always available to participate. Due to security and privacy
concerns, it is also desirable that this autonomous learning preserve the
confidentiality of the users' data and models. Hence, in this paper, we aim to
design a scheme that enables autonomous and confidential model refining in the
cloud. Homomorphic encryption and trusted execution environment technology can
each protect confidentiality for autonomous computation, but each has its own
limitations, and the two are complementary. Therefore, we propose to integrate
these two techniques in the design of the model refining scheme. Through
implementation and experiments, we evaluate the feasibility of the proposed
scheme. The results indicate that, with our scheme, the cloud server can
autonomously refine an encrypted model with newly provided encrypted training
data to continuously improve its accuracy. Although the efficiency is still
significantly lower than that of the baseline scheme, which refines a plaintext
model with plaintext data, we expect that it can be improved by fully
exploiting the higher level of parallelism and the computational power of the
GPU at the cloud server.
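To illustrate the kind of computation involved, the sketch below shows one gradient-descent step carried out directly on CKKS-encrypted model weights and CKKS-encrypted training data. It is a minimal illustration only, assuming the open-source TenSEAL library and toy, slot-packed single-feature data; it is not the authors' implementation, and it omits the TEE-assisted parts of the scheme.

```python
# Minimal sketch (assumed setup, not the paper's implementation): one SGD step
# on CKKS-encrypted weights and CKKS-encrypted data using the TenSEAL library.
# For simplicity, three independent one-feature regression models are packed
# into the CKKS slots, so every operation below is slot-wise.
import tenseal as ts

# --- Client side: create keys and encrypt the model and new training data ---
ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=16384,
    coeff_mod_bit_sizes=[60, 40, 40, 40, 40, 60],  # enough levels for 3 multiplications
)
ctx.global_scale = 2 ** 40

w = [0.10, -0.20, 0.05]   # current model parameters (one per slot)
x = [1.0, 2.0, 3.0]       # newly collected inputs
y = [0.5, -0.1, 0.3]      # corresponding labels

enc_w = ts.ckks_vector(ctx, w)
enc_x = ts.ckks_vector(ctx, x)
enc_y = ts.ckks_vector(ctx, y)

# --- Cloud side: refine the encrypted model without seeing any plaintext ---
lr = 0.01
enc_pred = enc_w * enc_x            # slot-wise predictions  w_i * x_i
enc_err  = enc_pred - enc_y         # slot-wise prediction errors
enc_grad = enc_err * enc_x          # gradient of the squared loss per slot
enc_w    = enc_w - enc_grad * lr    # one encrypted gradient-descent update

# --- Client side: decrypt to inspect the refined model ---
print(enc_w.decrypt())
```

In the authors' full scheme, presumably the operations that are costly or infeasible under homomorphic encryption alone (for example, refreshing ciphertext noise or evaluating non-linear functions) are the ones delegated to the trusted execution environment, which is what makes the two techniques complementary.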
Related papers
- Learning in the Dark: Privacy-Preserving Machine Learning using Function Approximation [1.8907108368038215]
Learning in the Dark is a privacy-preserving machine learning model that can classify encrypted images with high accuracy.
It achieves this by performing computations directly on encrypted data.
arXiv Detail & Related papers (2023-09-15T06:45:58Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and split learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Basic approaches built on cross-platform tensor frameworks and script language engines do not supply the procedures and pipelines needed to actually deploy machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while still using such basic cross-platform tensor framework and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- Edge-Cloud Polarization and Collaboration: A Comprehensive Survey [61.05059817550049]
We conduct a systematic review for both cloud and edge AI.
We are the first to set up a collaborative learning mechanism between cloud and edge modeling.
We discuss the potential of, and practical experience with, several ongoing advanced edge AI topics.
arXiv Detail & Related papers (2021-11-11T05:58:23Z)
- Complexity-aware Adaptive Training and Inference for Edge-Cloud Distributed AI Systems [9.273593723275544]
IoT and machine learning applications create large amounts of data that require real-time processing.
We propose a distributed AI system to exploit both the edge and the cloud for training and inference.
arXiv Detail & Related papers (2021-09-14T05:03:54Z)
- Auto-Split: A General Framework of Collaborative Edge-Cloud AI [49.750972428032355]
This paper describes the techniques and engineering practice behind Auto-Split, an edge-cloud collaborative prototype of Huawei Cloud.
To the best of our knowledge, there is no existing industry product that provides the capability of Deep Neural Network (DNN) splitting.
arXiv Detail & Related papers (2021-08-30T08:03:29Z)
- Device-Cloud Collaborative Learning for Recommendation [50.01289274123047]
We propose a novel MetaPatch learning approach on the device side to efficiently achieve "thousands of people with thousands of models" given a centralized cloud model.
With billions of updated personalized device models, we propose a "model-over-models" distillation algorithm, namely MoMoDistill, to update the centralized cloud model.
arXiv Detail & Related papers (2021-04-14T05:06:59Z)
- A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service [68.84245063902908]
This paper introduces a novel distributed architecture for deep-learning-as-a-service.
It preserves users' sensitive data while providing cloud-based machine learning and deep learning services.
arXiv Detail & Related papers (2020-03-30T15:12:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.