PPFL: Privacy-preserving Federated Learning with Trusted Execution
Environments
- URL: http://arxiv.org/abs/2104.14380v1
- Date: Thu, 29 Apr 2021 14:46:16 GMT
- Title: PPFL: Privacy-preserving Federated Learning with Trusted Execution
Environments
- Authors: Fan Mo, Hamed Haddadi, Kleomenis Katevas, Eduard Marin, Diego Perino,
Nicolas Kourtellis
- Abstract summary: We propose and implement a Privacy-preserving Federated Learning framework for mobile systems.
We utilize Trusted Execution Environments (TEEs) on clients for local training, and on servers for secure aggregation.
The performance evaluation of our implementation shows that PPFL can significantly improve privacy while incurring small system overheads at the client-side.
- Score: 10.157652550610017
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose and implement a Privacy-preserving Federated Learning (PPFL)
framework for mobile systems to limit privacy leakages in federated learning.
Leveraging the widespread presence of Trusted Execution Environments (TEEs) in
high-end and mobile devices, we utilize TEEs on clients for local training, and
on servers for secure aggregation, so that model/gradient updates are hidden
from adversaries. Challenged by the limited memory size of current TEEs, we
leverage greedy layer-wise training to train each model's layer inside the
trusted area until its convergence. The performance evaluation of our
implementation shows that PPFL can significantly improve privacy while
incurring small system overheads at the client-side. In particular, PPFL can
successfully defend the trained model against data reconstruction, property
inference, and membership inference attacks. Furthermore, it can achieve
comparable model utility with fewer communication rounds (0.54x) and a similar
amount of network traffic (1.002x) compared to the standard federated learning
of a complete model. This is achieved while only introducing up to ~15% CPU
time, ~18% memory usage, and ~21% energy consumption overhead in PPFL's
client-side.
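To make the greedy layer-wise scheme concrete, below is a minimal sketch that trains one layer at a time across simulated clients and freezes each layer once its federated rounds finish. It assumes PyTorch and synthetic data; the TEE-protected local training and secure aggregation are only imitated here by plain in-memory averaging, so it illustrates the training schedule rather than the authors' implementation.

```python
# Hypothetical sketch: greedy layer-wise federated training.
# In PPFL, client-side training and server-side aggregation would run inside
# TEEs; here both are plain Python for illustration only.
import copy
import torch
from torch import nn, optim

NUM_CLIENTS, ROUNDS_PER_LAYER, DIM, CLASSES = 4, 5, 32, 10
LAYER_WIDTHS = [64, 64]                      # hidden layers, trained one at a time

def client_data(n=128):
    # Synthetic stand-in for a client's private local dataset.
    return torch.randn(n, DIM), torch.randint(0, CLASSES, (n,))

datasets = [client_data() for _ in range(NUM_CLIENTS)]
frozen, prev_dim = nn.Sequential(), DIM      # layers already trained and frozen

for width in LAYER_WIDTHS:
    global_layer = nn.Sequential(nn.Linear(prev_dim, width), nn.ReLU())
    for _ in range(ROUNDS_PER_LAYER):        # federated rounds for this layer
        updates = []
        for x, y in datasets:
            # --- would run inside the client TEE in PPFL ---
            layer = copy.deepcopy(global_layer)
            head = nn.Linear(width, CLASSES)             # throwaway local head
            opt = optim.SGD(list(layer.parameters()) + list(head.parameters()), lr=0.1)
            with torch.no_grad():
                feats = frozen(x)                        # frozen layers act as a fixed encoder
            for _ in range(3):                           # a few local epochs
                opt.zero_grad()
                nn.functional.cross_entropy(head(layer(feats)), y).backward()
                opt.step()
            updates.append(layer.state_dict())
        # --- would be secure aggregation inside the server TEE in PPFL ---
        avg = {k: torch.stack([u[k] for u in updates]).mean(0) for k in updates[0]}
        global_layer.load_state_dict(avg)
    for p in global_layer.parameters():                  # freeze the converged layer
        p.requires_grad_(False)
    frozen = nn.Sequential(*frozen, *global_layer)
    prev_dim = width

print(frozen)                                            # the greedily trained feature extractor
```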
Related papers
- FuSeFL: Fully Secure and Scalable Cross-Silo Federated Learning [0.8686220240511062]
Federated Learning (FL) enables collaborative model training without centralizing client data, making it attractive for privacy-sensitive domains.
We present FuSeFL, a fully secure and scalable FL scheme designed for cross-silo settings.
arXiv Detail & Related papers (2025-07-18T00:50:44Z)
- Personalized Wireless Federated Learning for Large Language Models [75.22457544349668]
Large language models (LLMs) have driven profound transformations in wireless networks.
Within wireless environments, the training of LLMs faces significant challenges related to security and privacy.
This paper presents a systematic analysis of the training stages of LLMs in wireless networks, including pre-training, instruction tuning, and alignment tuning.
arXiv Detail & Related papers (2024-04-20T02:30:21Z)
- HierSFL: Local Differential Privacy-aided Split Federated Learning in Mobile Edge Computing [7.180235086275924]
Federated Learning is a promising approach for learning from user data while preserving data privacy.
Split Federated Learning is utilized, where clients upload their intermediate model training outcomes to a cloud server for collaborative server-client model training.
This methodology facilitates the participation of resource-constrained clients in model training, but it also increases training time and communication overhead.
We propose a novel algorithm, called Hierarchical Split Federated Learning (HierSFL), that amalgamates models at the edge and cloud phases.
arXiv Detail & Related papers (2024-01-16T09:34:10Z)
- Efficient Vertical Federated Learning with Secure Aggregation [10.295508659999783]
We present a novel design for training vertical FL securely and efficiently using state-of-the-art security modules for secure aggregation.
We demonstrate empirically that our method does not impact training performance whilst obtaining a 9.1e2 to 3.8e4 speedup compared to homomorphic encryption (HE).
arXiv Detail & Related papers (2023-05-18T18:08:36Z)
- FedML-HE: An Efficient Homomorphic-Encryption-Based Privacy-Preserving Federated Learning System [24.39699808493429]
Federated Learning trains machine learning models on distributed devices by aggregating local model updates instead of local data.
Privacy concerns arise because the aggregated local models on the server may reveal sensitive personal information via inversion attacks.
We present FedML-HE, the first practical federated learning system with efficient HE-based secure model aggregation (a toy homomorphic-aggregation sketch is given after this list).
arXiv Detail & Related papers (2023-03-20T02:44:35Z)
- Hierarchical Personalized Federated Learning Over Massive Mobile Edge Computing Networks [95.39148209543175]
We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z)
- P4L: Privacy Preserving Peer-to-Peer Learning for Infrastructureless Setups [5.601217969637838]
P4L is a privacy preserving peer-to-peer learning system for users to participate in an asynchronous, collaborative learning scheme.
Our design uses strong cryptographic primitives to preserve both the confidentiality and utility of the shared gradients.
arXiv Detail & Related papers (2023-02-26T23:30:18Z)
- FedDBL: Communication and Data Efficient Federated Deep-Broad Learning for Histopathological Tissue Classification [65.7405397206767]
We propose Federated Deep-Broad Learning (FedDBL) to achieve superior classification performance with limited training samples and only one-round communication.
FedDBL greatly outperforms the competitors with only one round of communication and limited training samples, while even achieving performance comparable to approaches trained over multiple communication rounds.
Since no data or deep models are shared across clients, privacy is well protected and model security is guaranteed, with no risk of model inversion attacks.
arXiv Detail & Related papers (2023-02-24T14:27:41Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response during both inference and backpropagation (a minimal sketch of this exchange is given after this list).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Shielding Federated Learning Systems against Inference Attacks with ARM TrustZone [0.0]
Federated Learning (FL) opens new perspectives for training machine learning models while keeping personal data on the users' premises.
The long list of inference attacks that leak private data from gradients, published in recent years, has emphasized the need for effective protection mechanisms.
We present GradSec, a solution that allows protecting in a TEE only sensitive layers of a machine learning model.
arXiv Detail & Related papers (2022-08-11T15:53:07Z)
- Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM [62.62684911017472]
Federated learning (FL) enables devices to jointly train shared models while keeping the training data local for privacy purposes.
We introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account.
VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-20T23:14:33Z)
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
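Two of the entries above (HierSFL and the representation-sharing paper) hinge on the split-learning exchange in which clients send cut-layer activations and receive gradients in return. The toy round below, assuming PyTorch, synthetic data, and label sharing with the server (none of which comes from the cited papers), shows what crosses the network in each direction.

```python
# Hypothetical single split-learning round (not a specific cited system).
import torch
from torch import nn, optim

client_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU())   # layers kept on the device
server_net = nn.Linear(64, 10)                              # layers held by the server
c_opt = optim.SGD(client_net.parameters(), lr=0.1)
s_opt = optim.SGD(server_net.parameters(), lr=0.1)

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))     # one private mini-batch
# Labels y are assumed to be visible to the server in this vanilla setup.

# Client: forward up to the cut layer, then "send" the smashed data.
smashed = client_net(x)
sent = smashed.detach().requires_grad_(True)                # what actually leaves the device

# Server: finish the forward pass, backprop down to the cut layer.
s_opt.zero_grad()
loss = nn.functional.cross_entropy(server_net(sent), y)
loss.backward()
s_opt.step()

# Client: apply the activation gradient "returned" by the server.
c_opt.zero_grad()
smashed.backward(sent.grad)
c_opt.step()
print(float(loss))                                          # loss for this round
```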
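The FedML-HE entry above refers to HE-based secure model aggregation. As a toy illustration only, assuming the python-paillier (`phe`) package and a single scalar update per client rather than anything from the FedML-HE system, the server below averages ciphertexts without ever seeing an individual client's update.

```python
# Toy additively homomorphic aggregation with Paillier (package: phe).
# Clients share one keypair; the server handles only ciphertexts.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

client_updates = [0.12, -0.40, 0.31, 0.05]                     # pretend scalar model updates
ciphertexts = [public_key.encrypt(u) for u in client_updates]  # done on each client

# Server side: homomorphic sum and scaling, entirely on ciphertexts.
agg = ciphertexts[0]
for c in ciphertexts[1:]:
    agg = agg + c
avg_cipher = agg * (1.0 / len(ciphertexts))

# Only the key holder(s) can recover the aggregated plaintext.
print(private_key.decrypt(avg_cipher))                         # ~0.02, the plain average
```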