Sparsified Secure Aggregation for Privacy-Preserving Federated Learning
- URL: http://arxiv.org/abs/2112.12872v1
- Date: Thu, 23 Dec 2021 22:44:21 GMT
- Title: Sparsified Secure Aggregation for Privacy-Preserving Federated Learning
- Authors: Irem Ergun, Hasin Us Sami, Basak Guler
- Abstract summary: We propose a lightweight gradient sparsification framework for secure aggregation.
Our theoretical analysis demonstrates that the proposed framework can significantly reduce the communication overhead of secure aggregation.
Our experiments demonstrate that our framework reduces the communication overhead by up to 7.8x, while also speeding up the wall clock training time by 1.13x, when compared to conventional secure aggregation benchmarks.
- Score: 1.2891210250935146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Secure aggregation is a popular protocol in privacy-preserving federated
learning, which allows model aggregation without revealing the individual
models in the clear. On the other hand, conventional secure aggregation
protocols incur a significant communication overhead, which can become a major
bottleneck in real-world bandwidth-limited applications. Towards addressing
this challenge, in this work we propose a lightweight gradient sparsification
framework for secure aggregation, in which the server learns the aggregate of
the sparsified local model updates from a large number of users, but without
learning the individual parameters. Our theoretical analysis demonstrates that
the proposed framework can significantly reduce the communication overhead of
secure aggregation while ensuring comparable computational complexity. We
further identify a trade-off between privacy and communication efficiency due
to sparsification. Our experiments demonstrate that our framework reduces the
communication overhead by up to 7.8x, while also speeding up the wall clock
training time by 1.13x, when compared to conventional secure aggregation
benchmarks.
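To make the mechanism concrete, below is a minimal, self-contained sketch of the general idea described in the abstract, not the authors' exact protocol: each user keeps only the top-k coordinates of its local gradient (sparsification) and adds pairwise random masks that cancel when the server sums all received updates, so the server recovers only the aggregate of the sparsified updates. Real secure-aggregation protocols additionally quantize updates and operate over a finite field; the floating-point NumPy version and all function names here are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's protocol): top-k sparsification
# combined with pairwise canceling masks for secure aggregation.
import numpy as np

def top_k_sparsify(grad: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest-magnitude coordinates; zero out the rest."""
    sparse = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    sparse[idx] = grad[idx]
    return sparse

def pairwise_masks(num_users: int, dim: int, seed: int = 0) -> list:
    """Per-user masks built from pairwise shared randomness; they sum to zero."""
    rng = np.random.default_rng(seed)
    shared = {(u, v): rng.standard_normal(dim)
              for u in range(num_users) for v in range(u + 1, num_users)}
    masks = []
    for u in range(num_users):
        m = np.zeros(dim)
        for v in range(num_users):
            if u < v:
                m += shared[(u, v)]
            elif v < u:
                m -= shared[(v, u)]
        masks.append(m)
    return masks

# Toy run: 4 users, 10-dimensional gradients, each user keeps k = 3 coordinates.
num_users, dim, k = 4, 10, 3
rng = np.random.default_rng(42)
grads = [rng.standard_normal(dim) for _ in range(num_users)]
sparse_grads = [top_k_sparsify(g, k) for g in grads]
masks = pairwise_masks(num_users, dim)

masked = [g + m for g, m in zip(sparse_grads, masks)]  # what each user sends
aggregate = np.sum(masked, axis=0)                     # masks cancel at the server

assert np.allclose(aggregate, np.sum(sparse_grads, axis=0))
print("aggregate of sparsified updates:", np.round(aggregate, 3))
```

Because each user only transmits (and masks) its top-k coordinates, the uplink cost shrinks with k, which is the source of the communication/privacy trade-off the paper analyzes.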
Related papers
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- FedMPQ: Secure and Communication-Efficient Federated Learning with Multi-codebook Product Quantization [12.83265009728818]
We propose a novel uplink communication compression method for federated learning, named FedMPQ.
In contrast to previous works, our approach exhibits greater robustness in scenarios where data is not independently and identically distributed.
Experiments conducted on the LEAF dataset demonstrate that our proposed method achieves 99% of the baseline's final accuracy.
arXiv Detail & Related papers (2024-04-21T08:27:36Z)
- Scalable Federated Unlearning via Isolated and Coded Sharding [76.12847512410767]
Federated unlearning has emerged as a promising paradigm to erase the client-level data effect.
This paper proposes a scalable federated unlearning framework based on isolated sharding and coded computing.
arXiv Detail & Related papers (2024-01-29T08:41:45Z)
- Efficient and Secure Federated Learning for Financial Applications [15.04345368582332]
This article proposes two sparsification methods to reduce communication cost in federated learning.
One is a time-varying hierarchical sparsification method for model parameter updates, which addresses the challenge of maintaining model accuracy under high sparsification ratios.
The other is to apply the sparsification method to the secure aggregation framework.
arXiv Detail & Related papers (2023-03-15T04:15:51Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch descent gradient.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Privacy-preserving Decentralized Aggregation for Federated Learning [3.9323226496740733]
Federated learning is a promising framework for learning over decentralized data spanning multiple regions.
We develop a privacy-preserving decentralized aggregation protocol for federated learning.
We evaluate our algorithm on image classification and next-word prediction applications over benchmark datasets with 9 and 15 distributed sites.
arXiv Detail & Related papers (2020-12-13T23:45:42Z)
- Efficient Sparse Secure Aggregation for Federated Learning [0.20052993723676896]
We adapt compression-based federated techniques to additive secret sharing, leading to an efficient secure aggregation protocol.
We prove its privacy against malicious adversaries and its correctness in the semi-honest setting.
Compared to prior works on secure aggregation, our protocol achieves lower and adaptable communication costs for similar accuracy (a minimal secret-sharing sketch appears after this list).
arXiv Detail & Related papers (2020-07-29T14:28:30Z)
- Concentrated Differentially Private and Utility Preserving Federated Learning [24.239992194656164]
Federated learning is a machine learning setting where a set of edge devices collaboratively train a model under the orchestration of a central server.
In this paper, we develop a federated learning approach that addresses the privacy challenge without much degradation on model utility.
We provide a tight end-to-end privacy guarantee of our approach and analyze its theoretical convergence rates.
arXiv Detail & Related papers (2020-03-30T19:20:42Z)
- Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU's prediction accuracy is 90.96% higher than the advanced deep learning models.
arXiv Detail & Related papers (2020-03-19T13:07:49Z)
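Referenced from the "Efficient Sparse Secure Aggregation for Federated Learning" entry above: a minimal sketch, assuming additive secret sharing over a prime field, of how an aggregate can be reconstructed without exposing any individual (compressed) update. The modulus, share counts, and function names are illustrative assumptions, not details taken from that paper.

```python
# Illustrative sketch only: additive secret sharing for aggregation.
import numpy as np

P = 2**31 - 1  # assumed prime modulus; real systems size the field to avoid overflow

def additive_shares(update: np.ndarray, n_shares: int, rng) -> list:
    """Split an integer vector into n_shares random vectors that sum to it mod P."""
    shares = [rng.integers(0, P, size=update.shape, dtype=np.int64)
              for _ in range(n_shares - 1)]
    shares.append((update - sum(shares)) % P)
    return shares

rng = np.random.default_rng(0)
num_users, dim = 3, 8
# Quantized (integer) local updates; in practice these would be compressed/sparsified first.
updates = [rng.integers(0, 1000, size=dim, dtype=np.int64) for _ in range(num_users)]

# Each user sends one share to each aggregator (here: one aggregator per user).
all_shares = [additive_shares(u, num_users, rng) for u in updates]

# Each aggregator sums the shares it received; any single share is uniformly
# random and reveals nothing about the update it came from.
partials = [sum(all_shares[u][a] for u in range(num_users)) % P
            for a in range(num_users)]

# Combining the partial sums modulo P reconstructs only the aggregate update.
aggregate = sum(partials) % P
assert np.array_equal(aggregate, sum(updates) % P)
print("reconstructed aggregate:", aggregate)
```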
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.