P4L: Privacy Preserving Peer-to-Peer Learning for Infrastructureless Setups
- URL: http://arxiv.org/abs/2302.13438v1
- Date: Sun, 26 Feb 2023 23:30:18 GMT
- Title: P4L: Privacy Preserving Peer-to-Peer Learning for Infrastructureless Setups
- Authors: Ioannis Arapakis, Panagiotis Papadopoulos, Kleomenis Katevas, Diego
Perino
- Abstract summary: P4L is a privacy preserving peer-to-peer learning system for users to participate in an asynchronous, collaborative learning scheme.
Our design uses strong cryptographic primitives to preserve both the confidentiality and utility of the shared gradients.
- Score: 5.601217969637838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed (or Federated) learning enables users to train machine learning models on their own devices, while sharing only their models' gradients, usually in a differentially private way (at a cost in utility). Although such a
strategy provides better privacy guarantees than the traditional centralized
approach, it requires users to blindly trust a centralized infrastructure that
may also become a bottleneck with the increasing number of users. In this
paper, we design and implement P4L: a privacy preserving peer-to-peer learning
system for users to participate in an asynchronous, collaborative learning
scheme without requiring any sort of infrastructure or relying on differential
privacy. Our design uses strong cryptographic primitives to preserve both the confidentiality and utility of the shared gradients, together with peer-to-peer mechanisms for fault tolerance, user churn, and proximity and cross-device communications. Extensive simulations under different network settings and ML
scenarios for three real-life datasets show that P4L delivers performance competitive with baselines, while remaining resilient to different poisoning attacks. We implement P4L, and experimental results show that its performance overhead and power consumption are minimal (less than 3 mAh of battery discharge).
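The abstract credits gradient confidentiality to strong cryptographic primitives without naming them in this summary. As a hedged illustration, the sketch below uses additive secret sharing over a fixed-point field, one standard primitive for this job; the peer count, modulus, and gradient values are hypothetical, and P4L's actual construction may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
PRIME = 2**31 - 1   # field modulus (illustrative; real deployments choose this carefully)
SCALE = 10**6       # fixed-point scale mapping float gradients to field elements

def make_shares(grad, n_peers):
    """Split a gradient into n additive shares mod PRIME.
    Any n-1 shares are uniformly random; only all n together reveal the gradient."""
    fixed = np.round(grad * SCALE).astype(np.int64) % PRIME
    shares = rng.integers(0, PRIME, size=(n_peers - 1, grad.size), dtype=np.int64)
    last = (fixed - shares.sum(axis=0)) % PRIME
    return np.vstack([shares, last[None, :]])

def reconstruct_sum(all_shares):
    """Sum every share from every peer: the randomness cancels and only the
    (scaled) sum of all gradients remains."""
    total = all_shares.sum(axis=(0, 1)) % PRIME
    total = np.where(total > PRIME // 2, total - PRIME, total)  # recover negatives
    return total.astype(np.float64) / SCALE

# three peers, toy 4-dimensional gradients
grads = [np.array([0.10, -0.20, 0.30, 0.00]),
         np.array([0.20, 0.10, -0.10, 0.40]),
         np.array([-0.30, 0.20, 0.10, 0.10])]
shares = np.array([make_shares(g, 3) for g in grads])  # shares[i][j] goes to peer j
print(reconstruct_sum(shares))  # ~= element-wise sum [0.0, 0.1, 0.3, 0.5]
```

No single peer's gradient is recoverable from fewer than all of its shares, yet the aggregate remains exact, which is the confidentiality-plus-utility property the abstract describes.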
Related papers
- Client Clustering Meets Knowledge Sharing: Enhancing Privacy and Robustness in Personalized Peer-to-Peer Learning [5.881825061973424]
We develop P4 (Personalized, Private, Peer-to-Peer) to deliver personalized models for resource-constrained IoT devices.
Our solution employs a lightweight, fully decentralized algorithm to privately detect client similarity and form collaborative groups.
P4 achieves 5% to 30% higher accuracy than leading differentially private peer-to-peer approaches and maintains robustness with up to 30% malicious clients.
arXiv Detail & Related papers (2025-06-25T13:27:36Z)
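The P4 summary above mentions privately detecting client similarity to form collaborative groups. As an illustration only, here is a minimal (non-private) sketch of similarity-based grouping via cosine similarity of model updates; the threshold and update vectors are hypothetical, and P4's private detection protocol is necessarily more involved.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def form_groups(updates, threshold=0.5):
    """Greedy grouping: a client joins the first group whose representative
    update is cosine-similar beyond `threshold`, otherwise starts a new group."""
    groups, reps = [], []
    for cid, u in updates.items():
        for members, rep in zip(groups, reps):
            if cosine(u, rep) >= threshold:
                members.append(cid)
                break
        else:
            groups.append([cid])
            reps.append(u)
    return groups

# toy example: clients 0 and 1 share a data distribution, client 2 differs
updates = {0: np.array([1.0, 0.9, 0.1]),
           1: np.array([0.9, 1.0, 0.0]),
           2: np.array([-1.0, 0.1, 0.9])}
print(form_groups(updates))  # -> [[0, 1], [2]]
```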
- RLSA-PFL: Robust Lightweight Secure Aggregation with Model Inconsistency Detection in Privacy-Preserving Federated Learning [12.804623314091508]
Federated Learning (FL) allows users to collaboratively train a global machine learning model by sharing only local model updates, without exposing their private data to a central server.
Studies have revealed privacy vulnerabilities in FL, where adversaries can potentially infer sensitive information from the shared model parameters.
We present an efficient masking-based secure aggregation scheme that utilizes lightweight cryptographic primitives to mitigate privacy risks.
arXiv Detail & Related papers (2025-02-13T06:01:09Z)
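RLSA-PFL's summary names a masking-based secure aggregation scheme without detailing it. The sketch below shows the classic pairwise-masking idea behind many such schemes, with a hypothetical pairwise seed function standing in for real key agreement; it is not RLSA-PFL's actual protocol.

```python
import numpy as np

def masked_update(my_id, update, peer_ids, pair_seed):
    """Mask an update with one pseudorandom mask per peer. Masks are
    antisymmetric (what i adds, j subtracts), so they cancel in the sum
    and the aggregator only ever sees masked vectors."""
    masked = update.astype(np.float64).copy()
    for j in peer_ids:
        if j == my_id:
            continue
        # both ends of a pair derive the same mask from a shared seed
        rng = np.random.default_rng(pair_seed(min(my_id, j), max(my_id, j)))
        mask = rng.normal(size=update.shape)
        masked += mask if my_id < j else -mask
    return masked

ids = [0, 1, 2]
pair_seed = lambda a, b: 1000 * a + b      # stand-in for pairwise key agreement
updates = {i: np.full(3, float(i + 1)) for i in ids}
masked = [masked_update(i, updates[i], ids, pair_seed) for i in ids]
print(np.round(np.sum(masked, axis=0), 6))  # -> [6. 6. 6.], the exact sum
```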
- Private Federated Learning In Real World Application -- A Case Study [15.877427073033184]
This paper presents an implementation of machine learning model training using private federated learning (PFL) on edge devices.
We introduce a novel framework that uses PFL to address the challenge of training a model using users' private data.
The framework ensures that user data remain on individual devices, with only essential model updates transmitted to a central server for aggregation with privacy guarantees.
arXiv Detail & Related papers (2025-02-06T23:38:50Z)
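The case study above transmits only essential model updates "with privacy guarantees". One common way to realize such guarantees is clipping plus Gaussian noise; a minimal sketch of that recipe follows, with hypothetical clip norm and noise multiplier, and no claim that this is the paper's exact mechanism.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """L2-clip the update to bound each user's influence, then add Gaussian
    noise calibrated to the clip norm (the usual DP-SGD-style recipe)."""
    rng = rng or np.random.default_rng()
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return update * scale + noise

print(privatize_update(np.array([3.0, 4.0])))  # clipped to unit norm, then noised
```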
- DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning using Packed Secret Sharing [51.336015600778396]
Federated Learning (FL) has gained significant traction recently, both in industry and academia.
In FL, a machine learning model is trained using data from various end-users arranged in committees across several rounds.
Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining utility of the model.
arXiv Detail & Related papers (2024-10-21T16:25:14Z)
- P4: Towards private, personalized, and Peer-to-Peer learning [6.693404985718457]
Two main challenges of personalization are client clustering and data privacy.
We develop P4 (Personalized Private Peer-to-Peer), a method that ensures each client receives a personalized model.
P4 outperforms the state-of-the-art in differentially private P2P learning by up to 40 percent in terms of accuracy.
arXiv Detail & Related papers (2024-05-27T23:04:37Z)
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
FedIT encounters limitations such as the scarcity of instruction data and exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- SaFL: Sybil-aware Federated Learning with Application to Face Recognition [13.914187113334222]
Federated Learning (FL) is a machine learning paradigm to conduct collaborative learning among clients on a joint model.
On the downside, FL raises security and privacy concerns that have just started to be studied.
This paper proposes SaFL, a new defense method against poisoning attacks in FL.
arXiv Detail & Related papers (2023-11-07T21:06:06Z)
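SaFL's summary states only that it defends against poisoning attacks. For orientation, here is a generic robust-aggregation sketch (norm clipping plus coordinate-wise median), which is explicitly not SaFL's algorithm but shows the kind of rule such defenses build on.

```python
import numpy as np

def robust_aggregate(updates, clip_norm=10.0):
    """Clip each update's L2 norm, then take the coordinate-wise median --
    a standard robust rule that tolerates a minority of poisoned updates."""
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12)) for u in updates]
    return np.median(np.stack(clipped), axis=0)

# a single poisoned update is outvoted by the honest majority
honest = [np.array([0.10, 0.20]), np.array([0.12, 0.18]), np.array([0.09, 0.21])]
poisoned = np.array([5.0, -5.0])
print(robust_aggregate(honest + [poisoned]))  # stays close to the honest median
```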
- Evaluating Privacy Leakage in Split Learning [8.841387955312669]
On-device machine learning allows us to avoid sharing raw data with a third-party server during inference.
Split Learning (SL) is a promising approach that can overcome the resource limitations of running models fully on-device.
In SL, a large machine learning model is divided into two parts, with the bigger part residing on the server side and a smaller part executing on-device.
arXiv Detail & Related papers (2023-05-22T13:00:07Z)
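The split learning description above (a small on-device part, a large server part, exchanging cut-layer "smashed data") maps directly to code. A minimal PyTorch sketch with hypothetical layer sizes:

```python
import torch
import torch.nn as nn

# the smaller part runs on-device; the bigger part runs on the server
client_part = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
server_part = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

x = torch.randn(8, 32)             # raw data never leaves the device
smashed = client_part(x)           # cut-layer activations ("smashed data") sent out
logits = server_part(smashed)      # server completes the forward pass

loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (8,)))
loss.backward()                    # gradients flow back across the cut to the client
```

The privacy question the paper evaluates is exactly what the smashed activations leak about x, since they, not the raw data, cross the network.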
- FedDBL: Communication and Data Efficient Federated Deep-Broad Learning for Histopathological Tissue Classification [65.7405397206767]
We propose Federated Deep-Broad Learning (FedDBL) to achieve superior classification performance with limited training samples and only one-round communication.
FedDBL greatly outperforms its competitors under one-round communication and limited training samples, and even achieves performance comparable to approaches trained over multiple communication rounds.
Since no data or deep models are shared across clients, privacy is preserved and model security is guaranteed, with no risk of model inversion attacks.
arXiv Detail & Related papers (2023-02-24T14:27:41Z)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach converges faster than typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
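The representation-sharing entry above has clients collaborate via online knowledge distillation with a contrastive loss. A plausible instantiation is an InfoNCE-style loss over matched local and peer representations, sketched below with hypothetical batch and embedding sizes; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(local_repr, peer_repr, temperature=0.1):
    """InfoNCE-style loss: pull each sample's local representation toward the
    matching peer representation, push it away from other samples in the batch."""
    local = F.normalize(local_repr, dim=1)
    peer = F.normalize(peer_repr, dim=1)
    logits = local @ peer.t() / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(local.size(0))     # the matching index is the positive
    return F.cross_entropy(logits, targets)

# toy batch of 4 samples with 16-dimensional representations
loss = contrastive_distillation_loss(torch.randn(4, 16), torch.randn(4, 16))
```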
- Efficient Differentially Private Secure Aggregation for Federated Learning via Hardness of Learning with Errors [1.4680035572775534]
Federated machine learning leverages edge computing to develop models from network user data.
Privacy in federated learning remains a major challenge.
Recent advances in secure aggregation using multiparty computation eliminate the need for a third party.
We present a new federated learning protocol that leverages a novel differentially private, malicious secure aggregation protocol.
arXiv Detail & Related papers (2021-12-13T18:31:08Z)
- Mobility-Aware Cluster Federated Learning in Hierarchical Wireless Networks [81.83990083088345]
We develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
Our analysis proves that the learning performance of HFL deteriorates drastically with highly-mobile users.
To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm.
arXiv Detail & Related papers (2021-08-20T10:46:58Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) has emerged as a popular distributed learning scheme that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users poses significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
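The robustness-propagation entry above transfers robustness through batch-normalization statistics. As a loose illustration, the sketch below copies BN running statistics between two PyTorch models; the paper's "carefully designed" statistics go beyond this plain copy.

```python
import torch.nn as nn

def propagate_bn_stats(source, target):
    """Copy batch-norm running statistics from an adversarially trained model
    into another user's model. A bare-bones sketch of stats-based propagation;
    the paper's design is more careful than a plain copy."""
    for s, t in zip(source.modules(), target.modules()):
        if isinstance(s, nn.BatchNorm2d) and isinstance(t, nn.BatchNorm2d):
            t.running_mean.copy_(s.running_mean)
            t.running_var.copy_(s.running_var)
```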
- PPFL: Privacy-preserving Federated Learning with Trusted Execution Environments [10.157652550610017]
We propose and implement a Privacy-preserving Federated Learning framework for mobile systems.
We utilize Trusted Execution Environments (TEEs) on clients for local training, and on servers for secure aggregation.
The performance evaluation of our implementation shows that PPFL can significantly improve privacy while incurring small system overheads at the client-side.
arXiv Detail & Related papers (2021-04-29T14:46:16Z)