Privacy-Preserving Hierarchical Model-Distributed Inference
- URL: http://arxiv.org/abs/2407.18353v2
- Date: Sun, 15 Sep 2024 22:27:16 GMT
- Title: Privacy-Preserving Hierarchical Model-Distributed Inference
- Authors: Fatemeh Jafarian Dehkordi, Yasaman Keshtkarjahromi, Hulya Seferoglu
- Abstract summary: This paper focuses on designing a privacy-preserving Machine Learning (ML) inference protocol for a hierarchical setup.
Our goal is to speed up ML inference while providing privacy to both data and the ML model.
- Score: 4.331317259797958
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper focuses on designing a privacy-preserving Machine Learning (ML) inference protocol for a hierarchical setup, where clients own/generate data, model owners (cloud servers) hold a pre-trained ML model, and edge servers perform ML inference on clients' data using the cloud server's ML model. Our goal is to speed up ML inference while providing privacy to both the data and the ML model. Our approach (i) uses model-distributed inference (model parallelization) at the edge servers and (ii) reduces the amount of communication to/from the cloud server. Our privacy-preserving hierarchical model-distributed inference design, privateMDI, uses additive secret sharing and linearly homomorphic encryption to handle the linear calculations in ML inference, while garbled circuits and a novel three-party oblivious transfer handle the non-linear functions. privateMDI consists of an offline phase and an online phase, designed so that most of the data exchange happens in the offline phase and the communication overhead of the online phase is reduced. In particular, there is no communication to/from the cloud server in the online phase, and the amount of communication between the client and the edge servers is minimized. The experimental results demonstrate that privateMDI significantly reduces ML inference time compared to the baselines.
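As a concrete illustration of the additive-secret-sharing building block named above, the minimal Python sketch below splits a client's integer-encoded input into two random shares for two hypothetical edge servers; by linearity, the servers' partial results on a linear layer recombine to the true output while neither share alone reveals the input. This shows only the sharing idea, not the paper's full construction (which also protects the model via linearly homomorphic encryption and an offline phase); the modulus, two-server split, and names are illustrative assumptions.

```python
import numpy as np

Q = 2**31 - 1  # modulus of the sharing ring (illustrative choice)

def share(x, rng):
    """Split an integer vector x into two additive shares: x = (s0 + s1) mod Q."""
    s0 = rng.integers(0, Q, size=x.shape)
    s1 = (x - s0) % Q
    return s0, s1

rng = np.random.default_rng(0)
x = np.array([3, 1, 4, 1, 5])            # client's integer-encoded input
x0, x1 = share(x, rng)                   # one share per edge server

W = np.array([[2, 0, 1, 0, 3],           # linear-layer weights (toy values)
              [1, 1, 0, 2, 0]])
y0 = (W @ x0) % Q                        # computed by edge server 0 on its share
y1 = (W @ x1) % Q                        # computed by edge server 1 on its share

# By linearity, W @ (x0 + x1) = W @ x0 + W @ x1 (mod Q):
y = (y0 + y1) % Q
assert np.array_equal(y, (W @ x) % Q)    # shares recombine to the true result
```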
Related papers
- DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning using Packed Secret Sharing [51.336015600778396]
Federated Learning (FL) has gained lots of traction recently, both in industry and academia.
In FL, a machine learning model is trained using data from various end-users arranged in committees across several rounds.
Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining utility of the model.
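The summary names packed secret sharing without spelling it out; as a rough sketch of that primitive (not DMM's actual parameters or code), the snippet below packs several secrets into one Shamir-style polynomial so that a single set of shares protects a whole vector and shares can be summed server-side. Field size, evaluation points, and function names are illustrative assumptions.

```python
import random

P = 2_147_483_647  # prime field modulus (illustrative choice)

def lagrange_eval(points, x, p=P):
    """Evaluate the unique polynomial through `points` at x, over GF(p)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % p) % p
                den = den * ((xi - xj) % p) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

def packed_share(secrets, t, n, p=P):
    """Pack k secrets into one degree-(k+t-1) polynomial; any k+t of the n
    shares reconstruct all k secrets, while any t shares reveal nothing."""
    k = len(secrets)
    points = [(-(i + 1) % p, s) for i, s in enumerate(secrets)]     # secret slots
    points += [(n + i + 1, random.randrange(p)) for i in range(t)]  # randomness
    return [(x, lagrange_eval(points, x, p)) for x in range(1, n + 1)]

# Pack 3 values (e.g., gradient entries) with privacy threshold t=2, 6 servers.
secrets = [11, 22, 33]
shares = packed_share(secrets, t=2, n=6)
subset = shares[:5]                                   # any k+t = 5 shares suffice
recovered = [lagrange_eval(subset, -(i + 1) % P) for i in range(3)]
assert recovered == secrets
# Shares are additively homomorphic: adding two clients' share vectors
# pointwise yields shares of the sum, which is what secure aggregation needs.
```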
arXiv Detail & Related papers (2024-10-21T16:25:14Z)
- Efficient Federated Unlearning under Plausible Deniability [1.795561427808824]
Machine unlearning modifies the ML model's parameters in order to forget the influence of a specific data point on its weights.
Recent literature has highlighted that the contribution from data point(s) can be forged with some other data points in the dataset with probability close to one.
This paper introduces an efficient way to achieve federated unlearning, by employing a privacy model which allows the FL server to plausibly deny the client's participation.
arXiv Detail & Related papers (2024-10-13T18:08:24Z)
- FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model [48.33280660752336]
Large language models (LLMs) show amazing performance on many domain-specific tasks after fine-tuning with some appropriate data.
However, such domain-specific data are often privately distributed across multiple owners.
We introduce FedBiOT, a resource-efficient LLM fine-tuning approach to federated learning.
arXiv Detail & Related papers (2024-06-25T16:45:47Z)
- Safely Learning with Private Data: A Federated Learning Framework for Large Language Model [3.1077263218029105]
Federated learning (FL) is an ideal solution for training models with distributed private data.
Traditional frameworks like FedAvg are unsuitable for large language models (LLMs).
We propose FL-GLM, which prevents data leakage caused by both server-side and peer-client attacks.
arXiv Detail & Related papers (2024-06-21T06:43:15Z)
- Boosting Communication Efficiency of Federated Learning's Secure Aggregation [22.943966056320424]
Federated Learning (FL) is a decentralized machine learning approach where client devices train models locally and send them to a server.
FL is vulnerable to model inversion attacks, where the server can infer sensitive client data from trained models.
Google's Secure Aggregation (SecAgg) protocol addresses this data privacy issue by masking each client's trained model.
This poster introduces a Communication-Efficient Secure Aggregation (CESA) protocol that substantially reduces this overhead.
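For context, the SecAgg-style masking that CESA streamlines can be sketched in a few lines: each pair of clients derives a shared mask from a common seed, one endpoint adds it and the other subtracts it, so individual updates look random to the server while the masks cancel in the aggregate. The toy sketch below assumes no client dropout and uses invented names; CESA's contribution is reducing how many such pairwise masks are needed.

```python
import numpy as np

def pairwise_mask(client_id, clients, pair_seeds, shape):
    """Sum of this client's pairwise masks; they cancel across the full cohort."""
    mask = np.zeros(shape)
    for j in clients:
        if j == client_id:
            continue
        # Both endpoints of a pair derive the same mask from their shared seed.
        m = np.random.default_rng(pair_seeds[frozenset((client_id, j))]).normal(size=shape)
        mask += m if client_id < j else -m
    return mask

clients = [0, 1, 2]
shape = (4,)
rng = np.random.default_rng(42)
pair_seeds = {frozenset((i, j)): int(rng.integers(1 << 30))
              for i in clients for j in clients if i < j}

updates = {i: rng.normal(size=shape) for i in clients}   # local trained models
masked = {i: updates[i] + pairwise_mask(i, clients, pair_seeds, shape)
          for i in clients}

# The server only ever sees masked updates, yet the masks cancel in the sum.
agg = sum(masked.values())
assert np.allclose(agg, sum(updates.values()))
```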
arXiv Detail & Related papers (2024-05-02T10:00:16Z)
- Subspace based Federated Unlearning [75.90552823500633]
Federated unlearning aims to remove a specified target client's contribution to federated learning (FL) in order to satisfy the user's right to be forgotten.
Most existing federated unlearning algorithms require the server to store the history of the parameter updates.
We propose a simple-yet-effective subspace based federated unlearning method, dubbed SFU, that lets the global model perform gradient ascent.
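Reading the summary's mechanism loosely, the core step can be pictured as gradient ascent on the target client's loss, restricted to directions orthogonal to a subspace representing the remaining clients so that their contribution is approximately preserved. The numpy sketch below is a loose rendering of that idea under stated assumptions, not SFU's exact algorithm.

```python
import numpy as np

def subspace_unlearning_step(w, grad_target, other_reps, lr=0.1):
    """One gradient-ascent step on the target client's loss, projected onto the
    orthogonal complement of the subspace spanned by the other clients' data
    representations, so the ascent avoids directions they rely on."""
    # Orthonormal basis for the other clients' representation subspace.
    U, _, _ = np.linalg.svd(other_reps.T, full_matrices=False)
    g = grad_target - U @ (U.T @ grad_target)   # strip in-subspace components
    return w + lr * g                           # ascend: forget the target client

rng = np.random.default_rng(0)
w = np.ones(5)                                  # current global model (toy)
grad_target = rng.normal(size=5)                # gradient on target client's data
other_reps = rng.normal(size=(3, 5))            # rows: other clients' representations
w_new = subspace_unlearning_step(w, grad_target, other_reps)
```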
arXiv Detail & Related papers (2023-02-24T04:29:44Z)
- Federated Nearest Neighbor Machine Translation [66.8765098651988]
In this paper, we propose a novel federated nearest neighbor (FedNN) machine translation framework.
FedNN leverages one-round memorization-based interaction to share knowledge across different clients.
Experiments show that FedNN significantly reduces computational and communication costs compared with FedAvg.
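FedNN builds on nearest-neighbor machine translation, so a minimal sketch of the retrieve-and-interpolate step clarifies the mechanism: the next-token distribution is a mixture of the base model's distribution and one induced by the k nearest entries in a shared (representation, token) datastore. All names, the temperature, and the mixing weight below are illustrative, not FedNN's actual implementation.

```python
import numpy as np

def knn_augmented_probs(query, keys, values, model_probs, vocab,
                        k=4, lam=0.5, temp=10.0):
    """Mix the base model's next-token distribution with one induced by the
    k nearest (representation, token) pairs in a shared datastore."""
    d = np.linalg.norm(keys - query, axis=1)       # distance to every entry
    nn = np.argsort(d)[:k]                         # indices of the k nearest
    w = np.exp(-d[nn] / temp)
    w /= w.sum()                                   # softmax over -distance
    knn_probs = np.zeros(vocab)
    for weight, idx in zip(w, nn):
        knn_probs[values[idx]] += weight           # scatter weight onto tokens
    return lam * knn_probs + (1 - lam) * model_probs

vocab = 6
rng = np.random.default_rng(1)
keys = rng.normal(size=(100, 8))                   # memorized representations
values = rng.integers(0, vocab, size=100)          # their target tokens
query = rng.normal(size=8)                         # current decoding state
model_probs = np.full(vocab, 1 / vocab)            # base model's distribution
p = knn_augmented_probs(query, keys, values, model_probs, vocab)
assert np.isclose(p.sum(), 1.0)
```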
arXiv Detail & Related papers (2023-02-23T18:04:07Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
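In that setup clients exchange representations rather than weights or raw data, and distill from each other with a contrastive objective. One plausible shape for such a loss is the InfoNCE-style sketch below, where a client's representation of a sample is pulled toward a peer's representation of the same sample and pushed away from the rest of the batch; the paper's exact loss and communication pattern may differ.

```python
import numpy as np

def contrastive_distillation_loss(local_reps, peer_reps, temp=0.1):
    """InfoNCE-style loss: a client's representation of each sample should
    match the peer's representation of the same sample (the positive) against
    the peer's representations of every other sample in the batch."""
    a = local_reps / np.linalg.norm(local_reps, axis=1, keepdims=True)
    b = peer_reps / np.linalg.norm(peer_reps, axis=1, keepdims=True)
    logits = (a @ b.T) / temp                          # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                # positives on diagonal

rng = np.random.default_rng(0)
local = rng.normal(size=(8, 16))                 # this client's batch reps
peer = local + 0.05 * rng.normal(size=(8, 16))   # peer's reps, same samples
print(contrastive_distillation_loss(local, peer))
```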
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- DYNAFED: Tackling Client Data Heterogeneity with Global Dynamics [60.60173139258481]
Local training on non-IID distributed data results in deflected local optima.
A natural solution is to gather all client data onto the server, such that the server has a global view of the entire data distribution.
In this paper, we put forth an idea to collect and leverage global knowledge on the server without hindering data privacy.
arXiv Detail & Related papers (2022-11-20T06:13:06Z)
- Federated Split GANs [12.007429155505767]
We propose an alternative approach that trains ML models on users' devices themselves.
We focus on GANs (generative adversarial networks) and leverage their inherent privacy-preserving attribute.
Our system preserves data privacy, keeps training time short, and yields the same accuracy as model training on unconstrained devices.
arXiv Detail & Related papers (2022-07-04T23:53:47Z)
- AMI-FML: A Privacy-Preserving Federated Machine Learning Framework for AMI [2.7393821783237184]
A key challenge in developing distributed machine learning applications for AMI is to preserve user privacy while allowing active end-users participation.
This paper proposes a privacy-preserving federated learning framework for ML applications in the AMI.
We demonstrate the proposed framework on a use-case federated ML (FML) application that improves short-term load forecasting (STLF).
arXiv Detail & Related papers (2021-09-13T01:56:48Z)
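The summary does not give the aggregation rule, but a framework like this typically rests on FedAvg-style weighted parameter averaging; the sketch below shows that generic step for smart-meter (AMI) clients, with all sizes and names invented for illustration.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client model parameters, weighted by each
    client's local sample count (e.g., number of smart-meter readings)."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Toy example: three AMI clients holding flattened STLF-model parameters.
rng = np.random.default_rng(0)
client_weights = [rng.normal(size=4) for _ in range(3)]
client_sizes = [120, 80, 200]
global_weights = fed_avg(client_weights, client_sizes)
```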