Privacy-Preserving, Dropout-Resilient Aggregation in Decentralized Learning
- URL: http://arxiv.org/abs/2404.17984v1
- Date: Sat, 27 Apr 2024 19:17:02 GMT
- Title: Privacy-Preserving, Dropout-Resilient Aggregation in Decentralized Learning
- Authors: Ali Reza Ghavamipour, Benjamin Zi Hao Zhao, Fatih Turkmen
- Abstract summary: Decentralized learning (DL) offers a novel paradigm in machine learning by distributing training across clients without central aggregation.
DL's peer-to-peer model raises challenges in protecting against inference attacks and privacy leaks.
This work proposes three secret sharing-based dropout resilience approaches for privacy-preserving DL.
- Score: 3.9166000694570076
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decentralized learning (DL) offers a novel paradigm in machine learning by distributing training across clients without central aggregation, enhancing scalability and efficiency. However, DL's peer-to-peer model raises challenges in protecting against inference attacks and privacy leaks. By forgoing central bottlenecks, DL demands privacy-preserving aggregation methods to protect data from 'honest but curious' clients and adversaries, maintaining network-wide privacy. Privacy-preserving DL faces the additional hurdle of client dropout, i.e., clients failing to submit updates due to connectivity problems or unavailability, further complicating aggregation. This work proposes three secret sharing-based dropout resilience approaches for privacy-preserving DL. Our study evaluates the efficiency, performance, and accuracy of these protocols through experiments on datasets such as MNIST, Fashion-MNIST, SVHN, and CIFAR-10. We compare our protocols with traditional secret-sharing solutions across scenarios, including those with up to 1000 clients. Evaluations show that our protocols significantly outperform conventional methods, especially in scenarios with up to 30% client dropout and model sizes of up to $10^6$ parameters. Our approaches demonstrate markedly high efficiency with larger models, higher dropout rates, and extensive client networks, highlighting their effectiveness in enhancing decentralized learning systems' privacy and dropout robustness.
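The abstract describes secret sharing-based aggregation that survives client dropout. As an illustration only (not the paper's actual protocols), the following minimal Shamir-sharing sketch in Python shows why any t surviving clients can still recover the aggregate; the field modulus, the (n, t) parameters, and the scalar-valued updates are all assumptions made for brevity:

```python
import random

P = 2**61 - 1  # prime field modulus (assumed; real protocols pick per deployment)

def share(secret, n, t):
    # Shamir sharing: random degree-(t-1) polynomial with constant term = secret,
    # evaluated at points x = 1..n (one share per client).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return {x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)}

def reconstruct(points):
    # Lagrange interpolation at x = 0 recovers the polynomial's constant term.
    total = 0
    for xi, yi in points.items():
        num = den = 1
        for xj in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# n clients, each holding a private model update (a single integer for simplicity).
n, t = 5, 3
updates = [7, 11, 2, 9, 4]

# Every client shares its update; client x locally sums the shares it receives.
# Sums of shares are shares of the sum, since polynomials add pointwise.
held = {x: 0 for x in range(1, n + 1)}
for u in updates:
    for x, s in share(u, n, t).items():
        held[x] = (held[x] + s) % P

# Two clients drop out; any t survivors suffice to recover the aggregate,
# and no coalition smaller than t learns anything about individual updates.
survivors = {x: held[x] for x in (1, 3, 5)}
assert reconstruct(survivors) == sum(updates) % P
```

Dropout resilience here comes from the threshold t: up to n - t clients may vanish without blocking reconstruction, at the cost of tolerating collusion only below t.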
Related papers
- Efficient Federated Unlearning with Adaptive Differential Privacy Preservation [15.8083997286637]
Federated unlearning (FU) offers a promising solution to erase the impact of specific clients' data on the global model in federated learning (FL).
Current state-of-the-art FU methods extend traditional FL frameworks by leveraging stored historical updates.
We propose FedADP, a method designed to achieve both efficiency and privacy preservation in FU.
arXiv Detail & Related papers (2024-11-17T11:45:15Z)
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
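The masking idea behind SecAgg, mentioned above, can be illustrated with a toy sketch (an assumption-laden simplification, not Google's protocol: it shows only how pairwise masks cancel in the sum, omitting the secret-shared recovery masks that handle dropouts):

```python
import random

random.seed(0)
n = 4
updates = [3.0, 1.5, -2.0, 4.0]  # each client's private model update (toy scalars)

# Each pair (i, j) with i < j agrees on a shared random mask m_ij
# (in practice derived from a key exchange); i adds it, j subtracts it.
masks = {(i, j): random.uniform(-10, 10)
         for i in range(n) for j in range(i + 1, n)}

masked = []
for i in range(n):
    x = updates[i]
    for (a, b), m in masks.items():
        if a == i:
            x += m
        elif b == i:
            x -= m
    masked.append(x)

# Individual masked updates look random to the server,
# but the pairwise masks cancel exactly in the aggregate.
assert abs(sum(masked) - sum(updates)) < 1e-9
```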
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution, by consolidating collaborative training across multiple data owners.
FedIT encounters limitations such as scarcity of instructional data and risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- Personalized Federated Learning with Attention-based Client Selection [57.71009302168411]
We propose FedACS, a new PFL algorithm with an Attention-based Client Selection mechanism.
FedACS integrates an attention mechanism to enhance collaboration among clients with similar data distributions.
Experiments on CIFAR10 and FMNIST validate FedACS's superiority.
arXiv Detail & Related papers (2023-12-23T03:31:46Z)
- Robust and Actively Secure Serverless Collaborative Learning [48.01929996757643]
Collaborative machine learning (ML) is widely used to enable institutions to learn better models from distributed data.
While collaborative approaches to learning intuitively protect user data, they remain vulnerable to either the server, the clients, or both.
We propose a peer-to-peer (P2P) learning scheme that is secure against malicious servers and robust to malicious clients.
arXiv Detail & Related papers (2023-10-25T14:43:03Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties solely from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Group privacy for personalized federated learning [4.30484058393522]
Federated learning is a type of collaborative machine learning, where participating clients process their data locally, sharing only updates to the collaborative model.
We propose a method to provide group privacy guarantees exploiting some key properties of $d$-privacy.
arXiv Detail & Related papers (2022-06-07T15:43:45Z)
- Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large quantities to amplify the impact of their model updates in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all claimed quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
arXiv Detail & Related papers (2022-05-22T15:13:23Z)
- Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy [4.951247283741297]
Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model.
We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion.
We conclude with empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two data sets.
arXiv Detail & Related papers (2022-02-20T19:52:53Z)
- PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy [14.678119872268198]
Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and only exchanging model updates.
Existing FL protocol designs have been shown to be vulnerable to attacks that aim to compromise data privacy and/or model robustness.
We develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and enhances robustness against model poisoning attacks with the help of cryptography.
arXiv Detail & Related papers (2021-10-22T04:08:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.