Reconciling Security and Communication Efficiency in Federated Learning
- URL: http://arxiv.org/abs/2207.12779v1
- Date: Tue, 26 Jul 2022 09:52:55 GMT
- Title: Reconciling Security and Communication Efficiency in Federated Learning
- Authors: Karthik Prasad, Sayan Ghosh, Graham Cormode, Ilya Mironov, Ashkan
Yousefpour, Pierre Stock
- Abstract summary: Cross-device Federated Learning is an increasingly popular machine learning setting.
In this paper, we formalize and address the problem of compressing client-to-server model updates.
We establish state-of-the-art results on LEAF benchmarks in a secure Federated Learning setup with up to 40$\times$ compression in uplink communication.
- Score: 11.653872280157321
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cross-device Federated Learning is an increasingly popular machine learning
setting to train a model by leveraging a large population of client devices
with high privacy and security guarantees. However, communication efficiency
remains a major bottleneck when scaling federated learning to production
environments, particularly due to bandwidth constraints during uplink
communication. In this paper, we formalize and address the problem of
compressing client-to-server model updates under the Secure Aggregation
primitive, a core component of Federated Learning pipelines that allows the
server to aggregate the client updates without accessing them individually. In
particular, we adapt standard scalar quantization and pruning methods to Secure
Aggregation and propose Secure Indexing, a variant of Secure Aggregation that
supports quantization for extreme compression. We establish state-of-the-art
results on LEAF benchmarks in a secure Federated Learning setup with up to
40$\times$ compression in uplink communication with no meaningful loss in
utility compared to uncompressed baselines.
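The key constraint the paper works under is that Secure Aggregation only reveals the sum of client updates, so any compressor has to commute with summation. Below is a minimal, illustrative sketch of uniform scalar quantization with stochastic rounding, one standard way to keep a quantizer unbiased under aggregation; the function names and the 8-bit default are assumptions, not the paper's exact construction.

```python
import numpy as np

def stochastic_quantize(update, num_bits=8):
    """Uniform scalar quantization with stochastic rounding.

    Stochastic rounding makes the quantizer unbiased, so the sum of
    quantized client updates (all the server sees under Secure
    Aggregation) still estimates the sum of the true updates.
    """
    levels = 2 ** num_bits - 1
    lo, hi = float(update.min()), float(update.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    normalized = (update - lo) / scale            # values in [0, levels]
    floor = np.floor(normalized)
    prob_up = normalized - floor                  # round up with this prob.
    quantized = floor + (np.random.rand(*update.shape) < prob_up)
    return quantized.astype(np.uint32), lo, scale

def dequantize(quantized, lo, scale):
    return quantized.astype(np.float32) * scale + lo

# Example: an 8-bit code is ~4x smaller than float32 per parameter.
update = np.random.randn(1000).astype(np.float32)
q, lo, scale = stochastic_quantize(update)
print("max abs error:", np.abs(dequantize(q, lo, scale) - update).max())
```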
Related papers
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
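As a rough illustration of the pairwise-masking idea behind SecAgg: each pair of clients derives a shared pad, one adds it and the other subtracts it, so the pads cancel in the server-side sum. The sketch below omits the protocol's second self-mask, the Shamir secret sharing used to survive dropouts, and modular arithmetic; the seed function is a toy stand-in for key agreement.

```python
import numpy as np

def pairwise_masks(client_ids, dim, seed_fn):
    """Client i adds +PRG(i, j) for every j > i and -PRG(i, j) for every
    j < i; summed over all clients, the masks cancel exactly."""
    masks = {}
    for i in client_ids:
        mask = np.zeros(dim, dtype=np.int64)
        for j in client_ids:
            if j == i:
                continue
            rng = np.random.default_rng(seed_fn(min(i, j), max(i, j)))
            pad = rng.integers(0, 2**32, size=dim)
            mask += pad if i < j else -pad
        masks[i] = mask
    return masks

clients = [0, 1, 2]
dim = 4
seed_fn = lambda a, b: a * 1_000 + b      # stand-in for a shared PRG seed
updates = {i: np.arange(dim, dtype=np.int64) + i for i in clients}
masks = pairwise_masks(clients, dim, seed_fn)
masked = {i: updates[i] + masks[i] for i in clients}
# The server sums the masked updates; individual updates stay hidden,
# but the pairwise masks cancel, leaving the true sum.
assert (sum(masked.values()) == sum(updates.values())).all()
```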
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- Communication-Efficient Federated Knowledge Graph Embedding with Entity-Wise Top-K Sparsification [49.66272783945571]
Federated Knowledge Graph Embedding learning (FKGE) encounters communication-efficiency challenges stemming from the considerable size of its parameters and its extensive communication rounds.
We propose FedS, a bidirectional communication-efficient method based on an Entity-Wise Top-K Sparsification strategy.
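A minimal sketch of the entity-wise top-k idea: keep only the k most-changed embedding rows and send their indices and values. The function names and the row-norm selection criterion are assumptions; FedS's bidirectional machinery is not reproduced here.

```python
import numpy as np

def topk_rows(emb_update, k):
    """Keep the k rows (entities) with the largest update magnitude and
    transmit only their indices and values, not the full matrix."""
    row_norms = np.linalg.norm(emb_update, axis=1)
    keep = np.argsort(row_norms)[-k:]
    return keep, emb_update[keep]

def densify(indices, values, shape):
    out = np.zeros(shape, dtype=values.dtype)
    out[indices] = values
    return out

# Example: send 10 of 1000 entity embeddings, a ~100x uplink reduction
# (ignoring the small index overhead).
update = np.random.randn(1000, 64).astype(np.float32)
idx, vals = topk_rows(update, k=10)
recovered = densify(idx, vals, update.shape)
```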
arXiv Detail & Related papers (2024-06-19T05:26:02Z)
- EncCluster: Scalable Functional Encryption in Federated Learning through Weight Clustering and Probabilistic Filters [3.9660142560142067]
Federated Learning (FL) enables model training across decentralized devices by communicating solely local model updates to an aggregation server.
FL remains vulnerable to inference attacks during model update transmissions.
We present EncCluster, a novel method that integrates model compression through weight clustering with recent decentralized functional encryption (FE) and privacy-enhancing data encoding.
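A sketch of the weight-clustering step alone, with a generic 1-D k-means standing in; EncCluster's functional encryption and probabilistic filters are omitted, and the cluster count is an assumption.

```python
import numpy as np

def cluster_weights(weights, n_clusters=16, iters=10):
    """1-D k-means over weight values: each weight is replaced by a small
    centroid index, so only the centroids plus 4-bit codes are sent."""
    flat = weights.ravel()
    centroids = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(iters):
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        for c in range(n_clusters):
            members = flat[assign == c]
            if members.size:
                centroids[c] = members.mean()
    return assign.astype(np.uint8), centroids

weights = np.random.randn(256, 128).astype(np.float32)
codes, centroids = cluster_weights(weights)
reconstructed = centroids[codes].reshape(weights.shape)
```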
arXiv Detail & Related papers (2024-06-13T14:16:50Z)
- FedMPQ: Secure and Communication-Efficient Federated Learning with Multi-codebook Product Quantization [12.83265009728818]
We propose a novel uplink communication compression method for federated learning, named FedMPQ.
In contrast to previous works, our approach exhibits greater robustness in scenarios where data is not independently and identically distributed.
Experiments conducted on the LEAF dataset demonstrate that our proposed method achieves 99% of the baseline's final accuracy.
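For intuition, the generic product-quantization mechanics look like this: split the update into sub-vectors and send one codeword index per block. The randomly initialized codebooks below are an assumption; FedMPQ learns shared multi-codebooks, which this sketch does not reproduce.

```python
import numpy as np

def pq_encode(update, codebooks):
    """Split the update into sub-vectors; send the index of the nearest
    codeword per block instead of the raw values."""
    n_blocks, n_codes, sub_dim = codebooks.shape
    blocks = update.reshape(n_blocks, sub_dim)
    dists = np.linalg.norm(blocks[:, None, :] - codebooks, axis=2)
    return dists.argmin(axis=1).astype(np.uint8)   # one byte per block

def pq_decode(codes, codebooks):
    return codebooks[np.arange(len(codes)), codes].ravel()

# Example: a 512-dim update as 64 blocks of 8 dims with 256 codewords
# per block -- 64 bytes on the wire instead of 2048 for float32.
rng = np.random.default_rng(0)
codebooks = rng.standard_normal((64, 256, 8)).astype(np.float32)
update = rng.standard_normal(512).astype(np.float32)
codes = pq_encode(update, codebooks)
approx = pq_decode(codes, codebooks)
```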
arXiv Detail & Related papers (2024-04-21T08:27:36Z)
- Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes [54.18186259484828]
In the Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds.
We show strong evidence that variable-length codes are beneficial for compression in FL.
We present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates.
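The underlying premise can be illustrated with a standard Huffman code: when quantized update values are skewed, variable-length codewords beat any fixed-length code. This is plain Huffman coding, not Fed-CVLC's dynamic code-length tuning.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Assign short codewords to frequent quantization levels; average
    bits per symbol approaches the entropy of the value distribution."""
    freq = Counter(symbols)
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in left.items()}
        merged.update({s: "1" + w for s, w in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Example: a skewed distribution of 2-bit quantized update values.
values = [0] * 900 + [1] * 60 + [2] * 30 + [3] * 10
code = huffman_code(values)
variable_bits = sum(len(code[v]) for v in values)
print(variable_bits, "bits vs", 2 * len(values), "bits fixed-length")
```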
arXiv Detail & Related papers (2024-02-06T07:25:21Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- FedDBL: Communication and Data Efficient Federated Deep-Broad Learning for Histopathological Tissue Classification [65.7405397206767]
We propose Federated Deep-Broad Learning (FedDBL) to achieve superior classification performance with limited training samples and only one-round communication.
FedDBL greatly outperforms competitors with only one-round communication and limited training samples, and even achieves performance comparable to methods that use multiple communication rounds.
Since neither data nor deep models are shared across clients, privacy is preserved and the model is secure against model inversion attacks.
arXiv Detail & Related papers (2023-02-24T14:27:41Z)
- SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression [40.646108010388986]
We propose a unified framework that enhances the communication efficiency of private federated learning with communication compression.
We provide a comprehensive characterization of its performance trade-offs in terms of privacy, utility, and communication complexity.
arXiv Detail & Related papers (2022-06-20T16:47:58Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Byzantine-robust Federated Learning through Spatial-temporal Analysis of Local Model Updates [6.758334200305236]
Federated Learning (FL) enables multiple distributed clients (e.g., mobile devices) to collaboratively train a centralized model while keeping the training data locally on the client.
In this paper, we propose to mitigate such Byzantine failures and attacks from a spatial-temporal perspective.
Specifically, we use a clustering-based method to detect and exclude incorrect updates by leveraging their geometric properties in the parameter space.
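A toy version of the spatial filtering step, assuming a simple two-cluster split of updates in parameter space and a majority-is-honest assumption; the paper's temporal analysis is omitted and all names are illustrative.

```python
import numpy as np

def filter_and_aggregate(updates, iters=10, seed=0):
    """Two-means clustering of client updates in parameter space; only
    the larger (presumed honest) cluster is averaged."""
    X = np.stack(updates)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=2, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for c in (0, 1):
            if (assign == c).any():
                centers[c] = X[assign == c].mean(axis=0)
    majority = int((assign == 1).sum() > (assign == 0).sum())
    return X[assign == majority].mean(axis=0)

# Example: 8 honest updates near zero, 2 malicious updates far away.
honest = [np.random.randn(5) * 0.1 for _ in range(8)]
malicious = [np.ones(5) * 10.0 for _ in range(2)]
print(filter_and_aggregate(honest + malicious))  # near the honest mean
```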
arXiv Detail & Related papers (2021-07-03T18:48:11Z)