P4: Towards private, personalized, and Peer-to-Peer learning
- URL: http://arxiv.org/abs/2405.17697v2
- Date: Fri, 31 May 2024 17:47:52 GMT
- Title: P4: Towards private, personalized, and Peer-to-Peer learning
- Authors: Mohammad Mahdi Maheri, Sandra Siby, Sina Abdollahi, Anastasia Borovykh, Hamed Haddadi
- Abstract summary: Two main challenges of personalization are client clustering and data privacy.
We develop P4 (Personalized Private Peer-to-Peer), a method that ensures each client receives a personalized model.
P4 outperforms the state-of-the-art differentially private P2P approach by up to 40 percent in accuracy.
- Score: 6.693404985718457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized learning is a proposed approach to address the problem of data heterogeneity in collaborative machine learning. In a decentralized setting, the two main challenges of personalization are client clustering and data privacy. In this paper, we address these challenges by developing P4 (Personalized Private Peer-to-Peer), a method that ensures that each client receives a personalized model while maintaining differential privacy guarantees for each client's local dataset during and after training. Our approach includes the design of a lightweight algorithm to identify similar clients and group them in a private, peer-to-peer (P2P) manner. Once clients are grouped, we develop differentially private knowledge distillation for them to co-train with minimal impact on accuracy. We evaluate our proposed method on three benchmark datasets (FEMNIST, i.e., Federated EMNIST; CIFAR-10; and CIFAR-100) and two neural network architectures (linear and CNN-based networks) across a range of privacy parameters. The results demonstrate the potential of P4: it outperforms the state-of-the-art differentially private P2P approach by up to 40 percent in accuracy. We also show the practicality of P4 by implementing it on resource-constrained devices and validating that it has minimal overhead, e.g., about 7 seconds to run collaborative training between two clients.
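The abstract gives no pseudocode, so the following is only a minimal sketch of the two stages it describes: private similarity-based grouping followed by differentially private knowledge distillation. The concrete choices here (cosine similarity over clipped, Gaussian-noised model updates; Gaussian noise on teacher logits; all function and parameter names) are illustrative assumptions rather than P4's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_update_sketch(local_update, clip_norm, sigma):
    """Clip a client's model-update vector and add Gaussian noise
    before sharing it with peers (illustrative DP-style release)."""
    norm = np.linalg.norm(local_update)
    clipped = local_update * min(1.0, clip_norm / norm)
    return clipped + rng.normal(0.0, sigma * clip_norm, size=clipped.shape)

def group_clients(noisy_updates, threshold=0.5):
    """Greedy grouping of clients whose noisy updates point in
    similar directions (cosine similarity above a threshold)."""
    groups = []
    for i, u in enumerate(noisy_updates):
        for g in groups:
            rep = noisy_updates[g[0]]
            cos = u @ rep / (np.linalg.norm(u) * np.linalg.norm(rep) + 1e-12)
            if cos >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

def dp_distillation_targets(teacher_logits, sigma, temperature=2.0):
    """Gaussian-noised soft labels a 'teacher' client could share so a
    peer can co-train on them (one possible DP distillation step)."""
    noisy = teacher_logits + rng.normal(0.0, sigma, size=teacher_logits.shape)
    z = noisy / temperature
    z -= z.max(axis=1, keepdims=True)
    return np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

# toy usage: 4 clients, 10-dimensional updates
updates = [rng.normal(size=10) for _ in range(4)]
shared = [noisy_update_sketch(u, clip_norm=1.0, sigma=0.5) for u in updates]
print(group_clients(shared))
print(dp_distillation_targets(rng.normal(size=(2, 5)), sigma=0.3))
```

In the actual system the grouping and co-training run peer-to-peer among resource-constrained devices, whereas this toy version computes everything in a single process.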
Related papers
- Client Clustering Meets Knowledge Sharing: Enhancing Privacy and Robustness in Personalized Peer-to-Peer Learning [5.881825061973424]
We develop P4 (Personalized, Private, Peer-to-Peer) to deliver personalized models for resource-constrained IoT devices.
Our solution employs a lightweight, fully decentralized algorithm to privately detect client similarity and form collaborative groups.
P4 achieves 5% to 30% higher accuracy than leading differentially private peer-to-peer approaches and maintains robustness with up to 30% malicious clients.
arXiv Detail & Related papers (2025-06-25T13:27:36Z) - Federated One-Shot Learning with Data Privacy and Objective-Hiding [4.634454848598446]
Privacy in federated learning is crucial, encompassing two key aspects: safeguarding the privacy of clients' data and maintaining the privacy of the federator's objective from the clients.
We present a novel approach that addresses both concerns simultaneously, drawing inspiration from techniques in knowledge distillation and private information retrieval to provide strong information-theoretic privacy guarantees.
arXiv Detail & Related papers (2025-04-29T21:25:34Z) - Federated Learning With Individualized Privacy Through Client Sampling [2.0432201743624456]
We propose an adapted method for enabling Individualized Differential Privacy (IDP) in Federated Learning (FL).
We calculate client-specific sampling rates based on their heterogeneous privacy budgets and integrate them into a modified IDP-FedAvg algorithm.
The experimental results demonstrate that our approach achieves clear improvements over uniform DP baselines, reducing the trade-off between privacy and utility.
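A minimal sketch of the client-sampling idea this entry describes, under my own assumptions: per-client sampling rates are scaled in proportion to each client's privacy budget (the paper's actual calibration of rate to budget is not given here), and clients are then selected by independent Bernoulli (Poisson) sampling each round.

```python
import numpy as np

rng = np.random.default_rng(1)

def sampling_rates(budgets, base_rate=0.1):
    """Map heterogeneous per-client privacy budgets to per-client
    sampling rates (proportional scaling; illustrative only)."""
    budgets = np.asarray(budgets, dtype=float)
    rates = base_rate * budgets / budgets.min()
    return np.clip(rates, 0.0, 1.0)

def sample_round(rates):
    """Poisson sampling: each client participates independently
    with its own probability."""
    return np.nonzero(rng.random(len(rates)) < rates)[0]

budgets = [1.0, 2.0, 8.0]          # per-client epsilon budgets (toy values)
rates = sampling_rates(budgets)
print(rates, sample_round(rates))
```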
arXiv Detail & Related papers (2025-01-29T13:11:21Z) - Personalized Quantum Federated Learning for Privacy Image Classification [52.04404538764307]
A personalized quantum federated learning algorithm is proposed to enhance the personality of the client model in the case of an imbalanced distribution of images.
The experimental results indicate that the personalized quantum federated learning algorithm can obtain global and local models with excellent performance.
arXiv Detail & Related papers (2024-10-03T14:53:04Z) - Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable, posing a serious social threat.
Traditional forgery detection methods directly centralized training on data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution, by consolidating collaborative training across multiple data owners.
FedIT encounters limitations such as scarcity of instructional data and risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z) - Personalized Federated Learning of Probabilistic Models: A PAC-Bayesian Approach [42.59649764999974]
Federated learning aims to infer a shared model from private and decentralized data stored locally by multiple clients.
We propose a PFL algorithm named PAC-PFL for learning probabilistic models within a PAC-Bayesian framework.
Our algorithm collaboratively learns a shared hyper-posterior and regards each client's posterior inference as the personalization step.
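For readers unfamiliar with the framework, the classical McAllester-style PAC-Bayesian bound below is the kind of guarantee such methods build on; the paper's actual bound (stated over a hyper-posterior and multiple clients) differs in form.

```latex
% Classical PAC-Bayesian bound (McAllester-style relaxation): with
% probability at least 1 - \delta over an i.i.d. sample of size n,
% simultaneously for every posterior Q over hypotheses,
\[
  L(Q) \;\le\; \widehat{L}(Q)
    + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}} ,
\]
% where P is a fixed, data-independent prior, \widehat{L}(Q) is the
% empirical risk and L(Q) the expected risk of the randomized (Gibbs)
% predictor drawn from Q.
```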
arXiv Detail & Related papers (2024-01-16T13:30:37Z) - Learn What You Need in Personalized Federated Learning [53.83081622573734]
$\textit{Learn2pFed}$ is a novel algorithm-unrolling-based personalized federated learning framework.
We show that $\textit{Learn2pFed}$ significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z) - P4L: Privacy Preserving Peer-to-Peer Learning for Infrastructureless Setups [5.601217969637838]
P4L is a privacy-preserving peer-to-peer learning system that lets users participate in an asynchronous, collaborative learning scheme.
Our design uses strong cryptographic primitives to preserve both the confidentiality and utility of the shared gradients.
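The entry does not specify P4L's primitives; the snippet below is a toy illustration of one standard way to keep shared gradients confidential while preserving their sum, namely pairwise additive masking as in secure aggregation, and is not P4L's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(2)

def masked_gradients(gradients):
    """Each pair of clients (i, j) agrees on a random mask m_ij;
    client i adds it and client j subtracts it, so each individual
    share hides its gradient but the masks cancel in the sum."""
    n = len(gradients)
    masked = [g.astype(float).copy() for g in gradients]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=gradients[i].shape)
            masked[i] += m
            masked[j] -= m
    return masked

grads = [rng.normal(size=4) for _ in range(3)]
shares = masked_gradients(grads)
# the aggregate is preserved even though each share looks random
print(np.allclose(sum(shares), sum(grads)))
```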
arXiv Detail & Related papers (2023-02-26T23:30:18Z) - FedHP: Heterogeneous Federated Learning with Privacy-preserving [0.0]
Federated learning is a distributed machine learning setting that ensures clients complete collaborative training without sharing private data, exchanging only parameters.
We propose a novel federated learning method, which consists of the pre-trained model as the backbone and fully connected layers as the head.
By sharing class embedding vectors instead of parameters from the gradient space, clients can better adapt to their private data, and communication between the server and clients is more efficient.
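A small sketch of the "share class embeddings instead of gradients" idea described above, under the assumption that each client averages frozen-backbone features per class into prototypes and the server averages those prototypes; the names and details are illustrative, not FedHP's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def class_prototypes(features, labels, num_classes):
    """Average a client's frozen-backbone feature vectors per class."""
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def aggregate(prototype_list):
    """Server-side step: average the clients' class prototypes."""
    return np.mean(prototype_list, axis=0)

# two toy clients with 8-dimensional features and 3 classes
clients = [(rng.normal(size=(20, 8)), rng.integers(0, 3, size=20))
           for _ in range(2)]
protos = [class_prototypes(f, y, num_classes=3) for f, y in clients]
print(aggregate(protos).shape)   # (3, 8)
```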
arXiv Detail & Related papers (2023-01-27T13:32:17Z) - Group privacy for personalized federated learning [4.30484058393522]
Federated learning is a type of collaborative machine learning, where participating clients process their data locally, sharing only updates to the collaborative model.
We propose a method to provide group privacy guarantees exploiting some key properties of $d$-privacy.
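The entry assumes familiarity with $d$-privacy (metric differential privacy); as a reminder, the scalar Laplace mechanism below satisfies $\varepsilon \cdot d(x, x')$-indistinguishability for the metric $d(x, x') = |x - x'|$. The paper's own mechanism for group privacy is more elaborate than this.

```python
import numpy as np

rng = np.random.default_rng(4)

def laplace_d_private(x, epsilon):
    """Scalar Laplace mechanism: output densities on inputs x and x'
    differ by at most a factor exp(epsilon * |x - x'|), i.e. the
    mechanism is epsilon*d-private for d(x, x') = |x - x'|."""
    return x + rng.laplace(loc=0.0, scale=1.0 / epsilon)

print(laplace_d_private(3.2, epsilon=0.5))
```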
arXiv Detail & Related papers (2022-06-07T15:43:45Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z) - Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning [7.197592390105457]
We study privacy in a distributed learning framework, where clients collaboratively build a learning model iteratively through interactions with a server from whom we need privacy.
Motivated by optimization and the federated learning (FL) paradigm, we focus on the case where a small fraction of data samples are randomly sub-sampled in each round.
To obtain even stronger local privacy guarantees, we study this in the shuffle privacy model, where each client randomizes its response using a local differentially private (LDP) mechanism.
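A toy sketch of the subsampled shuffle pipeline this entry analyzes, with my own choices of local randomizer (binary randomized response) and parameters; the paper's contribution is the Rényi-DP analysis of such a pipeline, not this specific code.

```python
import numpy as np

rng = np.random.default_rng(5)

def randomized_response(bit, eps_local):
    """Binary local-DP randomizer: keep the bit w.p. e^eps / (e^eps + 1)."""
    p_keep = np.exp(eps_local) / (np.exp(eps_local) + 1.0)
    return bit if rng.random() < p_keep else 1 - bit

def subsampled_shuffle_round(client_bits, sample_rate, eps_local):
    """One round: Poisson-subsample the clients, apply the local
    randomizer to each sampled report, then shuffle so the server
    cannot link reports back to clients."""
    reports = np.array([randomized_response(b, eps_local)
                        for b in client_bits if rng.random() < sample_rate])
    rng.shuffle(reports)
    return reports

bits = rng.integers(0, 2, size=100)          # each client holds one bit
print(subsampled_shuffle_round(bits, sample_rate=0.2, eps_local=1.0).sum())
```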
arXiv Detail & Related papers (2021-07-19T11:43:24Z)