SECO: Secure Inference With Model Splitting Across Multi-Server Hierarchy
- URL: http://arxiv.org/abs/2404.16232v1
- Date: Wed, 24 Apr 2024 22:24:52 GMT
- Title: SECO: Secure Inference With Model Splitting Across Multi-Server Hierarchy
- Authors: Shuangyi Chen, Ashish Khisti
- Abstract summary: We introduce SECO, a secure inference protocol that enables a user holding an input data vector and multiple server nodes deployed with a split neural network model to collaboratively compute the prediction.
We adopt multiparty homomorphic encryption and multiparty garbled circuit schemes, making the system secure against a dishonest majority of semi-honest servers.
- Score: 19.481512634321376
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In the context of prediction-as-a-service, concerns about the privacy of the data and the model have been raised and tackled via secure inference protocols. These protocols are built using single or multiple cryptographic tools designed under a variety of security assumptions. In this paper, we introduce SECO, a secure inference protocol that enables a user holding an input data vector and multiple server nodes deployed with a split neural network model to collaboratively compute the prediction, without compromising either party's data privacy. We extend prior work on secure inference, which requires the entire neural network model to be located on a single server node, to a multi-server hierarchy, where the user communicates with a gateway server node, which in turn communicates with remote server nodes. The inference task is split across the server nodes and must be performed over an encrypted copy of the data vector. We adopt multiparty homomorphic encryption and multiparty garbled circuit schemes, making the system secure against a dishonest majority of semi-honest servers while also protecting the partial model structure from the user. We evaluate SECO on multiple models, achieving reductions in the user's computation and communication costs that make the protocol practical for devices with limited resources.
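To make the protocol's data flow concrete, below is a minimal plaintext sketch of split inference over additive secret shares. It is illustrative only: SECO evaluates linear layers under multiparty homomorphic encryption and nonlinearities under multiparty garbled circuits, neither of which is modeled here, and the layer shapes, the two-server split, and the `reconstruct` helper standing in for the interactive nonlinear step are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def share(x):
    """Additively secret-share a vector into two random-looking shares."""
    r = rng.normal(size=x.shape)
    return x - r, r  # share0 + share1 == x

def reconstruct(s0, s1):
    """Stand-in for the interactive step (garbled circuits in SECO)."""
    return s0 + s1

# Split model: the gateway server holds layer 1, a remote server holds layer 2.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

# User: secret-share the input so no single server sees it.
x = rng.normal(size=4)
x0, x1 = share(x)

# Layer 1 (linear) runs share-wise: W1 @ (x0 + x1) == W1 @ x0 + W1 @ x1.
# The bias is added to only one share so it is not counted twice.
h0, h1 = W1 @ x0 + b1, W1 @ x1

# The nonlinearity (ReLU) is not linear, so the shares must be recombined
# interactively; SECO does this step under garbled circuits.
h = np.maximum(reconstruct(h0, h1), 0.0)
h0, h1 = share(h)

# Layer 2 on the remote server, again share-wise.
y = reconstruct(W2 @ h0 + b2, W2 @ h1)

# Check against evaluating the unsplit model in the clear.
expected = W2 @ np.maximum(W1 @ x + b1, 0.0) + b2
assert np.allclose(y, expected)
print("split inference matches plaintext inference:", y)
```

The property the sketch exercises is linearity, W(x0 + x1) = W x0 + W x1, which is what lets each server work on its share independently between interactive steps.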
Related papers
- CURE: Privacy-Preserving Split Learning Done Right [1.388112207221632]
Homomorphic encryption (HE)-based solutions exist for split learning but often impose prohibitive computational burdens.
CURE is a novel system that encrypts only the server side of the model and the data.
We demonstrate that CURE can achieve accuracy similar to plaintext SL while being 16x more efficient in runtime.
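As a toy illustration of how an additively homomorphic scheme lets a server evaluate an encrypted linear layer of a split model, here is textbook Paillier over deliberately tiny primes. Everything here is an assumption for the sketch (parameter sizes, integer-only activations, the specific layer); it is insecure at these sizes, and CURE itself relies on practical lattice-based HE rather than Paillier.

```python
import math, random

random.seed(0)

# Textbook Paillier with toy primes (insecure; illustration only).
p, q = 1_000_003, 1_000_033
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid simplification because g = n + 1

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def add(c1, c2):       # Enc(m1) * Enc(m2) = Enc(m1 + m2)
    return (c1 * c2) % n2

def scalar_mul(c, k):  # Enc(m) ** k = Enc(k * m)
    return pow(c, k, n2)

# Client encrypts its cut-layer activations (small nonnegative ints here).
x = [3, 1, 4]
enc_x = [enc(v) for v in x]

# Server evaluates its private linear layer on ciphertexts only.
W = [[2, 0, 1], [1, 5, 2]]
b = [7, 1]
enc_y = []
for row, bias in zip(W, b):
    acc = enc(bias)
    for c, w in zip(enc_x, row):
        acc = add(acc, scalar_mul(c, w))
    enc_y.append(acc)

# Client decrypts the layer output.
y = [dec(c) for c in enc_y]
assert y == [2*3 + 0*1 + 1*4 + 7, 1*3 + 5*1 + 2*4 + 1]
print("decrypted layer output:", y)  # [17, 17]
```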
arXiv Detail & Related papers (2024-07-12T04:10:19Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
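The coded-computing flavor of such aggregation can be illustrated with plain Shamir secret sharing over a prime field: each client shares its update, aggregators add shares pointwise, and only the sum is ever reconstructed. The field, threshold, and scalar updates below are assumptions of the sketch; PriRoAgg's robustness checks and zero-knowledge proofs are not modeled.

```python
import random

random.seed(1)
P = 2**61 - 1  # prime field modulus (a Mersenne prime)

def share(secret, t, n):
    """Shamir-share `secret` with threshold t among n parties."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

t, n = 3, 5                    # any 3 of 5 aggregators can reconstruct
updates = [17, 42, 8]          # each client's (toy, scalar) model update

# Each client shares its update; aggregator i sums its shares locally,
# so individual updates are never reconstructed; only their sum is.
agg_shares = [0] * n
for u in updates:
    for idx, (_, y) in enumerate(share(u, t, n)):
        agg_shares[idx] = (agg_shares[idx] + y) % P

# Any t aggregators jointly recover the aggregate update.
subset = [(i + 1, agg_shares[i]) for i in (0, 2, 4)]
assert reconstruct(subset) == sum(updates) % P
print("aggregate:", reconstruct(subset))  # 67
```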
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Secure Inference for Vertically Partitioned Data Using Multiparty Homomorphic Encryption [15.867269549049428]
We propose a secure inference protocol for a distributed setting involving a single server node and multiple client nodes.
We assume that the observed data vector is partitioned across multiple client nodes while the deep learning model is located at the server node.
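The identity such vertical splitting relies on is plain block decomposition of the first linear layer: applying it to the concatenated input equals the sum of per-client partial products, so each slice can be processed (under encryption) without reassembling the input in one place. A minimal numpy check, with the dimensions and two-client split invented for the illustration and the multiparty HE layer omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Server's first-layer weights over a 6-dim input.
W = rng.normal(size=(4, 6))

# The input vector is vertically partitioned across two clients.
x1, x2 = rng.normal(size=2), rng.normal(size=4)
x = np.concatenate([x1, x2])

# Block decomposition: W @ x == W[:, :2] @ x1 + W[:, 2:] @ x2, so each
# client's contribution can be computed separately and summed.
partial = W[:, :2] @ x1 + W[:, 2:] @ x2
assert np.allclose(W @ x, partial)
print(partial)
```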
arXiv Detail & Related papers (2024-05-06T18:17:27Z)
- Boosting Communication Efficiency of Federated Learning's Secure Aggregation [22.943966056320424]
Federated Learning (FL) is a decentralized machine learning approach where client devices train models locally and send them to a server.
FL is vulnerable to model inversion attacks, where the server can infer sensitive client data from trained models.
Google's Secure Aggregation (SecAgg) protocol addresses this data privacy issue by masking each client's trained model.
This poster introduces a Communication-Efficient Secure Aggregation (CESA) protocol that substantially reduces SecAgg's communication overhead.
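The heart of SecAgg-style masking is that pairwise masks cancel in the sum, so the server learns only the aggregate. A minimal sketch, with a toy seeded generator standing in for the agreed PRG, and with dropout handling and CESA's specific communication-saving topology left out:

```python
import numpy as np

def pairwise_mask(i, j, dim):
    """Mask derived from a seed that clients i and j both hold (toy PRG)."""
    seed = hash((min(i, j), max(i, j))) % 2**32
    return np.random.default_rng(seed).normal(size=dim)

n, dim = 4, 3
rng = np.random.default_rng(42)
updates = [rng.normal(size=dim) for _ in range(n)]

# Each client adds masks for higher-indexed peers and subtracts masks for
# lower-indexed peers; over the sum, every mask appears once with each
# sign and cancels exactly.
masked = []
for i in range(n):
    m = updates[i].copy()
    for j in range(n):
        if j == i:
            continue
        s = 1.0 if i < j else -1.0
        m += s * pairwise_mask(i, j, dim)
    masked.append(m)

# The server sums the masked updates; individual updates stay hidden.
agg = sum(masked)
assert np.allclose(agg, sum(updates))
print("aggregate:", agg)
```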
arXiv Detail & Related papers (2024-05-02T10:00:16Z)
- HasTEE+: Confidential Cloud Computing and Analytics with Haskell [50.994023665559496]
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs).
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (DSL) embedded in Haskell that enables programming TEEs in a high-level language with strong type safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z)
- Federated Nearest Neighbor Machine Translation [66.8765098651988]
In this paper, we propose a novel federated nearest neighbor (FedNN) machine translation framework.
FedNN leverages one-round memorization-based interaction to share knowledge across different clients.
Experiments show that FedNN significantly reduces computational and communication costs compared with FedAvg.
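The nearest-neighbor half of such a scheme fits in a few lines: retrieve the k closest entries from a shared datastore of (hidden state, next token) pairs, turn their distances into a distribution, and interpolate with the model's own prediction. The datastore contents, temperature, and mixing weight lambda below are illustrative assumptions, not FedNN's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, k, lam, temp = 5, 8, 3, 0.4, 10.0

# Shared datastore memorized in one round: (hidden state, next-token id).
keys = rng.normal(size=(50, dim))
values = rng.integers(0, vocab, size=50)

def knn_distribution(query):
    """Softmax over negative distances of the k nearest datastore keys."""
    d = np.linalg.norm(keys - query, axis=1)
    nn = np.argsort(d)[:k]
    w = np.exp(-d[nn] / temp)
    w /= w.sum()
    p = np.zeros(vocab)
    for token, weight in zip(values[nn], w):
        p[token] += weight
    return p

# Interpolate the retrieval distribution with the base model's softmax.
query = rng.normal(size=dim)
p_model = np.full(vocab, 1.0 / vocab)       # stand-in for the NMT model
p = lam * knn_distribution(query) + (1 - lam) * p_model
assert np.isclose(p.sum(), 1.0)
print("next-token distribution:", p)
```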
arXiv Detail & Related papers (2023-02-23T18:04:07Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
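A minimal version of such a contrastive objective is InfoNCE: a client's representation should score higher against the matching peer representation than against the rest of the batch. The batch size, temperature, and cosine-similarity choice below are assumptions of the sketch, not the paper's exact loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(z_client, z_peer, temp=0.1):
    """InfoNCE loss: row i of z_client should match row i of z_peer."""
    # Cosine similarities between all client/peer representation pairs.
    a = z_client / np.linalg.norm(z_client, axis=1, keepdims=True)
    b = z_peer / np.linalg.norm(z_peer, axis=1, keepdims=True)
    logits = a @ b.T / temp
    # Cross-entropy with the diagonal (matching pairs) as the labels.
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))

batch, dim = 16, 32
z_peer = rng.normal(size=(batch, dim))
z_noise = rng.normal(size=(batch, dim))

# Aligned representations give a lower loss than unrelated ones.
aligned = info_nce(z_peer + 0.05 * z_noise, z_peer)
random_ = info_nce(rng.normal(size=(batch, dim)), z_peer)
assert aligned < random_
print(f"aligned: {aligned:.3f}  random: {random_:.3f}")
```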
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy [4.951247283741297]
Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model.
We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against client collusion.
We conclude with empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two data sets.
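The distributed-noise idea underlying such mechanisms is easy to sketch: if each of n clients adds Gaussian noise of variance sigma^2/n, the aggregate carries the full sigma^2 the DP analysis requires, while no single party ever holds the complete noise. The parameters below are illustrative, and the obliviousness and collusion-resistance machinery of the actual protocol is not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim, sigma = 100, 10, 1.0   # clients, update size, target noise std

updates = rng.normal(size=(n, dim))

# Each client adds only a 1/n fraction of the required noise variance:
# summing n independent N(0, sigma^2/n) draws yields N(0, sigma^2).
noisy = updates + rng.normal(scale=sigma / np.sqrt(n), size=(n, dim))

agg = noisy.sum(axis=0)
err = agg - updates.sum(axis=0)
print("empirical noise std in aggregate:", err.std())  # ~ sigma
```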
arXiv Detail & Related papers (2022-02-20T19:52:53Z)
- Towards Differentially Private Text Representations [52.64048365919954]
We develop a new deep learning framework under an untrusted server setting.
For the randomization module, we propose a novel local differentially private (LDP) protocol to reduce the impact of the privacy parameter $\epsilon$ on accuracy.
Analysis and experiments show that our framework delivers comparable or even better performance than the non-private framework and existing LDP protocols.
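A generic LDP baseline for a bounded representation is the Laplace mechanism applied on-device: with features clipped to [0, 1]^d, per-coordinate Laplace noise of scale d/epsilon yields epsilon-LDP for the whole vector, since the L1 distance between any two clipped records is at most d. This is a standard baseline rather than the paper's protocol; d, epsilon, and the clipping range are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ldp_perturb(x, eps):
    """Laplace mechanism for epsilon-LDP on a vector clipped to [0, 1]^d."""
    x = np.clip(x, 0.0, 1.0)
    d = x.shape[0]
    # L1 sensitivity of the clipped record is at most d, so noise of
    # scale d/eps gives epsilon-LDP for the whole vector.
    return x + rng.laplace(scale=d / eps, size=d)

# Client-side: the representation is randomized before it leaves the device.
rep = rng.uniform(size=8)          # stand-in for a text representation
print("eps=1 :", ldp_perturb(rep, 1.0))
print("eps=20:", ldp_perturb(rep, 20.0))  # weaker privacy, less noise
```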
arXiv Detail & Related papers (2020-06-25T04:42:18Z)
- Serdab: An IoT Framework for Partitioning Neural Networks Computation across Multiple Enclaves [8.550865312110911]
Serdab is a distributed orchestration framework for deploying deep neural networks across multiple secure enclaves.
Our partitioning strategy achieves up to 4.7x speedup compared to executing the entire neural network in one enclave.
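The underlying partitioning problem is simple to state: split a chain of layers into contiguous groups, one per enclave, so that the most loaded enclave is as light as possible. A small sketch using binary search on the bottleneck cost; the layer costs and enclave count are invented, and this ignores the transfer and paging overheads a real deployment would also weigh.

```python
def partition_layers(costs, enclaves):
    """Split a layer-cost chain into `enclaves` contiguous groups,
    minimizing the busiest enclave (binary search on the bottleneck)."""
    def groups_needed(cap):
        groups, acc = 1, 0
        for c in costs:
            if acc + c > cap:
                groups, acc = groups + 1, 0
            acc += c
        return groups

    lo, hi = max(costs), sum(costs)
    while lo < hi:                      # smallest cap that still fits
        mid = (lo + hi) // 2
        if groups_needed(mid) <= enclaves:
            hi = mid
        else:
            lo = mid + 1

    # Greedily emit the groups under the optimal cap.
    parts, cur, acc = [], [], 0
    for c in costs:
        if acc + c > lo:
            parts.append(cur)
            cur, acc = [], 0
        cur.append(c)
        acc += c
    parts.append(cur)
    return parts

# Per-layer costs (e.g., ms per layer) for a toy 8-layer network.
layer_costs = [4, 2, 7, 1, 3, 6, 2, 5]
print(partition_layers(layer_costs, 3))
# [[4, 2, 7], [1, 3, 6, 2], [5]] with a bottleneck cost of 13
```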
arXiv Detail & Related papers (2020-05-12T20:51:47Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference on the order of seconds for medium-sized SPNs.
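For readers unfamiliar with SPNs, the model being protected here is a DAG of weighted sum nodes and product nodes over leaf distributions, so inference is one bottom-up pass. Below is a tiny two-variable SPN evaluated in the clear; the structure and weights are invented for the illustration, and CryptoSPN performs this evaluation inside secure computation, which the sketch does not attempt.

```python
# Leaves: Bernoulli distributions over two binary variables x0, x1.
def leaf(p):
    return lambda x: p if x == 1 else 1.0 - p

# A tiny SPN: a mixture (sum node) of two independence assumptions
# (product nodes). Sum-node weights must sum to 1.
x0_a, x1_a = leaf(0.9), leaf(0.2)
x0_b, x1_b = leaf(0.1), leaf(0.7)

def spn(x0, x1):
    prod_a = x0_a(x0) * x1_a(x1)        # product node: component A
    prod_b = x0_b(x0) * x1_b(x1)        # product node: component B
    return 0.6 * prod_a + 0.4 * prod_b  # sum node (mixture weights)

# Bottom-up evaluation is linear in the network size, and the
# distribution is properly normalized over all inputs.
total = sum(spn(a, b) for a in (0, 1) for b in (0, 1))
assert abs(total - 1.0) < 1e-12
print("P(x0=1, x1=0) =", spn(1, 0))  # 0.444
```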
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.