Compressed Private Aggregation for Scalable and Robust Federated Learning over Massive Networks
- URL: http://arxiv.org/abs/2308.00540v2
- Date: Thu, 08 May 2025 05:47:11 GMT
- Title: Compressed Private Aggregation for Scalable and Robust Federated Learning over Massive Networks
- Authors: Natalie Lang, Nir Shlezinger, Rafael G. L. D'Oliveira, Salim El Rouayheb
- Abstract summary: Federated learning (FL) is an emerging paradigm that allows a central server to train machine learning models using remote users' data. FL faces challenges in preserving the privacy of local datasets, sensitivity to poisoning attacks by malicious users, and communication overhead. We present compressed private aggregation (CPA), which allows massive deployments to communicate simultaneously at extremely low bit rates.
- Score: 34.29747990203208
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an emerging paradigm that allows a central server to train machine learning models using remote users' data. Despite its growing popularity, FL faces challenges in preserving the privacy of local datasets, sensitivity to poisoning attacks by malicious users, and communication overhead, the last of which is particularly dominant in large-scale networks. These limitations are often mitigated individually by local differential privacy (LDP) mechanisms, robust aggregation, compression, and user selection techniques, which typically come at the cost of accuracy. In this work, we present compressed private aggregation (CPA), which allows massive deployments to communicate simultaneously at extremely low bit rates while achieving privacy, anonymity, and resilience to malicious users. CPA randomizes a codebook that compresses the data into a few bits using nested lattice quantizers, ensuring anonymity and robustness, and applies a subsequent perturbation to satisfy LDP. The proposed CPA is proven to make FL converge at the same asymptotic rate as FL without privacy, compression, and robustness considerations, while satisfying both anonymity and LDP requirements. These analytical properties are empirically confirmed in a numerical study, where we demonstrate the performance gains of CPA compared with separate mechanisms for compression and privacy when training different image classification models, as well as its robustness in mitigating the harmful effects of malicious users.
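To make the CPA recipe concrete, here is a minimal single-coordinate sketch: a shared uniform dither randomizes the codebook, scalar quantization stands in for the paper's nested lattice quantizers, and K-ary randomized response supplies the LDP perturbation. All names and parameter choices below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def cpa_encode(x, step, dither, levels, eps):
    """Dithered scalar quantization (a stand-in for the paper's nested
    lattice quantizers), then K-ary randomized response over the codebook
    indices to satisfy eps-LDP."""
    idx = int(np.clip(np.round((x + dither) / step) + levels // 2, 0, levels - 1))
    p_keep = np.exp(eps) / (np.exp(eps) + levels - 1)  # K-ary RR keep prob.
    if rng.random() < p_keep:
        return idx
    other = rng.integers(levels - 1)                   # uniform over the others
    return other if other < idx else other + 1

def cpa_decode(idx, step, dither, levels):
    """Server maps the (possibly flipped) index back via the shared dither."""
    return (idx - levels // 2) * step - dither

step, levels, eps = 0.25, 16, 2.0
dither = rng.uniform(-step / 2, step / 2)  # shared randomness, user <-> server
sent = cpa_encode(0.8, step, dither, levels, eps)   # a few bits on the wire
print(cpa_decode(sent, step, dither, levels))
```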
Related papers
- DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning using Packed Secret Sharing [51.336015600778396]
Federated Learning (FL) has recently gained significant traction in both industry and academia.
In FL, a machine learning model is trained using data from various end-users arranged in committees across several rounds.
Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining utility of the model.
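DMM's packed secret sharing is a bandwidth-efficient variant of Shamir's scheme; the sketch below uses plain Shamir sharing (a simplification relative to the paper's construction) to show the property both rely on: shares are additively homomorphic, so a committee can reconstruct the sum of updates without ever seeing any individual one.

```python
import random

P = 2**31 - 1  # prime field modulus

def share(secret, n, t):
    """Shamir-share `secret` among n parties; any t shares reconstruct."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, yi in shares:
        num = den = 1
        for j, _ in shares:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Shares are additively homomorphic: summing shares pointwise yields
# shares of the sum, so only the aggregate is ever reconstructed.
n, t = 5, 3
updates = [7, 11, 4]                      # integer-encoded client updates
all_shares = [share(u, n, t) for u in updates]
summed = [(i, sum(s[k][1] for s in all_shares) % P)
          for k, (i, _) in enumerate(all_shares[0])]
print(reconstruct(summed[:t]))            # -> 22, the aggregate only
```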
arXiv Detail & Related papers (2024-10-21T16:25:14Z) - ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
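The double-masking idea behind SecAgg can be sketched in a few lines: pairwise masks are antisymmetric and cancel in the sum, while per-user self-masks are removed once the server recovers their seeds. This sketch hard-codes seeds and omits the key agreement and dropout-recovery protocol that the real scheme (and ACCESS-FL's optimizations) revolve around.

```python
import numpy as np

def prg(seed, dim):
    """Expand a seed into a pseudorandom mask vector."""
    return np.random.default_rng(seed).integers(0, 2**16, dim).astype(np.int64)

def mask_update(uid, update, pair_seeds, self_seed, dim, mod=2**16):
    """Double masking: pairwise masks are antisymmetric and cancel across
    users in the sum; the self-mask is later removed by the server once it
    recovers the seed (key agreement and dropout recovery omitted here)."""
    masked = update.astype(np.int64).copy()
    for other, seed in pair_seeds.items():
        m = prg(seed, dim)
        masked += m if uid < other else -m
    masked += prg(self_seed, dim)
    return masked % mod

dim, mod, seed12 = 4, 2**16, 42
updates = {1: np.array([1, 2, 3, 4]), 2: np.array([5, 6, 7, 8])}
self_seeds = {1: 7, 2: 9}
masked = {1: mask_update(1, updates[1], {2: seed12}, self_seeds[1], dim),
          2: mask_update(2, updates[2], {1: seed12}, self_seeds[2], dim)}
agg = (masked[1] + masked[2] - sum(prg(s, dim) for s in self_seeds.values())) % mod
print(agg)  # [ 6  8 10 12]: the sum is recovered, no single update revealed
```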
arXiv Detail & Related papers (2024-09-03T09:03:38Z) - A Novel Buffered Federated Learning Framework for Privacy-Driven Anomaly Detection in IIoT [11.127334284392676]
We propose a Buffered FL (BFL) framework empowered by homomorphic encryption for anomaly detection in heterogeneous IIoT environments.
BFL utilizes a novel weighted-average-time approach to mitigate both straggler effects and communication bottlenecks.
Results show the superiority of BFL compared to state-of-the-art FL methods, demonstrating improved accuracy and convergence speed.
arXiv Detail & Related papers (2024-08-16T13:01:59Z) - PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z) - Noise-Aware Algorithm for Heterogeneous Differentially Private Federated Learning [21.27813247914949]
We propose Robust-HDP, which efficiently estimates the true noise level in clients' model updates. It improves utility and convergence speed, while remaining safe against clients that may maliciously send falsified privacy parameters to the server.
arXiv Detail & Related papers (2024-06-05T17:41:42Z) - Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes [54.18186259484828]
In Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds.
We show strong evidence that variable-length coding is beneficial for compression in FL.
We present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates.
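Why variable-length coding helps: quantized model updates are heavily concentrated around zero, so entropy codes assign the frequent symbols short codewords. A minimal illustration with a Huffman code follows; Fed-CVLC's actual per-round code-length adaptation is more elaborate, and this only demonstrates the gain over a fixed-length code.

```python
import heapq
from collections import Counter
import numpy as np

def huffman_lengths(freqs):
    """Code length per symbol for a Huffman code over frequency table `freqs`."""
    heap = [(f, uid, (s,)) for uid, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = dict.fromkeys(freqs, 0)
    uid = len(heap)
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:        # each merge adds one bit of depth
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, uid, syms1 + syms2))
        uid += 1
    return lengths

# Quantized updates concentrate near zero, so short codewords dominate.
rng = np.random.default_rng(0)
q = np.clip(np.round(rng.normal(0, 1, 10_000)), -4, 4).astype(int)  # 9 symbols
freqs = Counter(q.tolist())
lengths = huffman_lengths(freqs)
avg_bits = sum(freqs[s] * lengths[s] for s in freqs) / len(q)
print(f"Huffman: {avg_bits:.2f} bits/symbol vs. 4-bit fixed-length code")
```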
arXiv Detail & Related papers (2024-02-06T07:25:21Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
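For context, the standard definition these guarantees build on (taken from the $f$-DP literature, not from this abstract), stated in LaTeX:

```latex
% Trade-off function of distributions P, Q: the least type-II error at
% type-I level alpha over rejection rules phi, with
% alpha_phi = E_P[phi] and beta_phi = 1 - E_Q[phi]:
T(P, Q)(\alpha) \;=\; \inf_{\phi} \left\{ \beta_{\phi} : \alpha_{\phi} \le \alpha \right\}
% A mechanism M is f-DP if, for all neighboring datasets S \simeq S',
T\bigl(M(S),\, M(S')\bigr) \;\ge\; f .
```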
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
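One way correlated perturbations can sidestep the privacy-accuracy tradeoff: if the users' noises are designed to sum to zero, each transmitted signal is perturbed but the over-the-air aggregate is exact. A minimal sketch of that cancel-in-the-sum idea; the paper's actual design additionally accounts for channel conditions and the adversary's observations.

```python
import numpy as np

rng = np.random.default_rng(0)

def zero_sum_noises(n_users, dim, sigma):
    """Correlated perturbations projected onto the zero-sum subspace: each
    user's transmitted signal is perturbed, yet the over-the-air sum at the
    server is unaffected."""
    noise = rng.normal(0, sigma, (n_users, dim))
    return noise - noise.mean(axis=0)     # columns now sum exactly to zero

n_users, dim = 10, 5
grads = rng.normal(0, 1, (n_users, dim))
perturbed = grads + zero_sum_noises(n_users, dim, sigma=1.0)
print(np.allclose(perturbed.sum(axis=0), grads.sum(axis=0)))  # True
```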
arXiv Detail & Related papers (2022-10-05T13:13:35Z) - Joint Privacy Enhancement and Quantization in Federated Learning [23.36363480217293]
Federated learning (FL) is an emerging paradigm for training machine learning models using possibly private data available at edge devices.
We propose a method coined joint privacy enhancement and quantization (JoPEQ).
We show that JoPEQ simultaneously quantizes data according to a required bit-rate while holding a desired privacy level.
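A key statistical fact that joint designs of this kind build on: subtractive dithered quantization yields an error that is uniform and independent of the input, so the distortion budget can double as privacy noise. Below is a minimal scalar sketch of that property; JoPEQ itself uses lattice quantizers together with dedicated privacy-preserving noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def dithered_quantize(x, step, dither):
    """Subtractive dithered quantization: with a shared dither uniform on
    one quantization cell, the end-to-end error is uniform and statistically
    independent of the input."""
    q = step * np.round((x + dither) / step)
    return q - dither                     # decoder subtracts the shared dither

step = 0.5
x = rng.normal(0, 1, 100_000)
dither = rng.uniform(-step / 2, step / 2, x.shape)
err = dithered_quantize(x, step, dither) - x
print(err.min(), err.max())                      # within [-step/2, step/2]
print(abs(np.corrcoef(err, x)[0, 1]))            # ~0: error decoupled from x
```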
arXiv Detail & Related papers (2022-08-23T11:42:58Z) - FedCG: Leverage Conditional GAN for Protecting Privacy and Maintaining Competitive Performance in Federated Learning [11.852346300577494]
Federated learning (FL) aims to protect data privacy by enabling clients to build machine learning models collaboratively without sharing their private data.
Recent works demonstrate that information exchanged during FL is subject to gradient-based privacy attacks.
We propose $\textsc{FedCG}$, a novel federated learning method that leverages conditional generative adversarial networks to achieve high-level privacy protection.
arXiv Detail & Related papers (2021-11-16T03:20:37Z) - Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
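A minimal sketch of the clipped DP-FedAvg step being analyzed: clip each client update in L2 norm, average, and add Gaussian noise calibrated to the clipped sensitivity. Cross-round privacy accounting is omitted, and the noise placement here is illustrative rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_fedavg_round(updates, clip, noise_mult):
    """One client-level DP-FedAvg step: clip each update to L2 norm `clip`,
    average, then add Gaussian noise scaled to the sensitivity clip / n of
    the clipped average."""
    n = len(updates)
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12)) for u in updates]
    avg = np.mean(clipped, axis=0)
    return avg + rng.normal(0, noise_mult * clip / n, avg.shape)

updates = [rng.normal(0, 1, 10) for _ in range(100)]
noisy_avg = dp_fedavg_round(updates, clip=1.0, noise_mult=1.1)
```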
arXiv Detail & Related papers (2021-06-25T14:47:19Z) - Federated Learning with Local Differential Privacy: Trade-offs between Privacy, Utility, and Communication [22.171647103023773]
Federated learning (FL) enables training on massive amounts of data privately, owing to its decentralized structure.
We consider Gaussian mechanisms to preserve local differential privacy (LDP) of user data in the FL model with SGD.
Our results guarantee a significantly larger utility and a smaller transmission rate as compared to existing privacy accounting methods.
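In the local model, each user perturbs its own update before transmission. A sketch using the classic Gaussian-mechanism calibration sigma = sqrt(2 ln(1.25/delta)) * clip / eps, valid for eps <= 1 and a single release; the paper's accounting is tighter than this textbook bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gaussian_mechanism(grad, clip, eps, delta):
    """Gaussian mechanism applied locally, before transmission: for a single
    release of an L2-clipped vector, sigma = sqrt(2 ln(1.25/delta)) * clip/eps
    gives (eps, delta)-LDP (classic bound, valid for eps <= 1)."""
    g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * clip / eps
    return g + rng.normal(0, sigma, g.shape)

private_grad = local_gaussian_mechanism(rng.normal(0, 1, 10),
                                        clip=1.0, eps=0.5, delta=1e-5)
```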
arXiv Detail & Related papers (2021-02-09T10:04:18Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
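Laplacian smoothing recovers some of that lost utility by post-processing the DP-perturbed gradient, which costs no privacy since DP is closed under post-processing. A generic sketch assuming a circulant 1-D Laplacian solved via FFT; the paper's exact smoothing operator may differ.

```python
import numpy as np

def laplacian_smooth(g, sigma):
    """Solve (I + sigma * L) x = g, with L the circulant 1-D discrete
    Laplacian; L is diagonalized by the DFT, so the solve is O(d log d)."""
    d = g.shape[0]
    eig = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(d) / d)  # eigenvalues of L
    return np.real(np.fft.ifft(np.fft.fft(g) / (1.0 + sigma * eig)))

rng = np.random.default_rng(0)
g_true = np.sin(np.linspace(0, 4 * np.pi, 256))
g_noisy = g_true + rng.normal(0, 0.5, 256)       # DP-style perturbation
g_smooth = laplacian_smooth(g_noisy, sigma=2.0)
print(np.linalg.norm(g_smooth - g_true) < np.linalg.norm(g_noisy - g_true))
```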
arXiv Detail & Related papers (2020-05-01T04:28:38Z)