Aggregating Gradients in Encoded Domain for Federated Learning
- URL: http://arxiv.org/abs/2205.13216v1
- Date: Thu, 26 May 2022 08:20:19 GMT
- Title: Aggregating Gradients in Encoded Domain for Federated Learning
- Authors: Dun Zeng, Shiyu Liu, Zenglin Xu
- Abstract summary: Malicious attackers and an honest-but-curious server can steal private client data from uploaded gradients in federated learning.
We propose the FedAGE framework, which enables the server to aggregate gradients in an encoded domain without accessing raw gradients of any single client.
- Score: 19.12395694047359
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Malicious attackers and an honest-but-curious server can steal private client
data from uploaded gradients in federated learning. Although current protection
methods (e.g., additive homomorphic cryptosystem) can guarantee the security of
the federated learning system, they bring additional computation and
communication costs. To mitigate the cost, we propose the \texttt{FedAGE}
framework, which enables the server to aggregate gradients in an encoded domain
without accessing raw gradients of any single client. Thus, \texttt{FedAGE} can
prevent the curious server from gradient stealing while maintaining the same
prediction performance without additional communication costs. Furthermore, we
theoretically prove that the proposed encoding-decoding framework is a Gaussian
mechanism for differential privacy. Finally, we evaluate \texttt{FedAGE} under
several federated settings, and the results have demonstrated the efficacy of
the proposed framework.
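The core idea of aggregating in an encoded domain can be illustrated with a minimal sketch. This is not FedAGE's actual encoding scheme; it is a hypothetical stand-in using pairwise additive masks (in the style of secure aggregation): each client uploads only a masked gradient, and the masks cancel when the server sums all uploads, so the server recovers the aggregate without ever seeing a raw gradient.

```python
import random

random.seed(0)
dim, n_clients = 4, 3

# Raw per-client gradients (the server must never see these directly).
grads = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_clients)]

# Pairwise masks m[(i, j)] for i < j: client i adds the mask and client j
# subtracts it, so every mask cancels in the sum over all clients.
masks = {(i, j): [random.gauss(0, 1) for _ in range(dim)]
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def encode(i, grad):
    """Encode client i's gradient by applying its share of the pairwise masks."""
    enc = list(grad)
    for (a, b), m in masks.items():
        sign = 1 if a == i else (-1 if b == i else 0)
        for k in range(dim):
            enc[k] += sign * m[k]
    return enc

encoded = [encode(i, g) for i, g in enumerate(grads)]          # uploaded to server
aggregate = [sum(e[k] for e in encoded) for k in range(dim)]   # server-side sum
true_sum = [sum(g[k] for g in grads) for k in range(dim)]
assert all(abs(a - t) < 1e-9 for a, t in zip(aggregate, true_sum))
```

Each individual upload looks like noise to the server, yet the sum is exact; adding calibrated Gaussian noise on top of such an encoding is what would connect a scheme like this to the Gaussian mechanism mentioned in the abstract.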
Related papers
- GIFD: A Generative Gradient Inversion Method with Feature Domain
Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z) - FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users [19.209830150036254]
The federated learning (FL) technique was developed to mitigate data privacy issues in the traditional machine learning paradigm.
Next-generation FL architectures proposed encryption and anonymization techniques to protect the model updates from the server.
This paper proposes a novel FL algorithm based on a fully homomorphic encryption (FHE) scheme.
arXiv Detail & Related papers (2023-06-08T11:20:00Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated
Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - Subspace based Federated Unlearning [75.90552823500633]
Federated unlearning (FU) aims to remove a specified target client's contribution in FL to satisfy the user's right to be forgotten.
Most existing federated unlearning algorithms require the server to store the history of the parameter updates.
We propose a simple-yet-effective subspace based federated unlearning method, dubbed SFU, that lets the global model perform gradient ascent.
arXiv Detail & Related papers (2023-02-24T04:29:44Z) - Over-the-Air Federated Learning with Privacy Protection via Correlated
Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
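The correlated-perturbation idea can be sketched as follows; this is an illustrative stand-in, not the paper's actual over-the-air scheme. Each client adds a noise vector to its gradient, but the noise vectors are constructed to sum to zero across clients, so individual uploads are perturbed while the server's aggregate is unaffected:

```python
import random

random.seed(1)
dim, n_clients = 4, 3

grads = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_clients)]

# Draw i.i.d. noise, then subtract the cross-client mean so each client's
# perturbation is individually random but the perturbations sum to zero.
noise = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_clients)]
mean = [sum(n[k] for n in noise) / n_clients for k in range(dim)]
perturb = [[n[k] - mean[k] for k in range(dim)] for n in noise]

uploads = [[g[k] + p[k] for k in range(dim)] for g, p in zip(grads, perturb)]
aggregate = [sum(u[k] for u in uploads) for k in range(dim)]   # unchanged sum
true_sum = [sum(g[k] for g in grads) for k in range(dim)]
assert all(abs(a - t) < 1e-9 for a, t in zip(aggregate, true_sum))
```

Because the perturbations cancel only in the sum, an eavesdropper observing a single upload sees a noisy gradient, while the edge server's aggregated model update loses no accuracy.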
arXiv Detail & Related papers (2022-10-05T13:13:35Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive
Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - Gradient Obfuscation Gives a False Sense of Security in Federated
Learning [41.36621813381792]
We present a new data reconstruction attack framework targeting the image classification task in federated learning.
Contrary to prior studies, we argue that privacy enhancement should not be treated as a byproduct of gradient compression.
arXiv Detail & Related papers (2022-06-08T13:01:09Z) - THE-X: Privacy-Preserving Transformer Inference with Homomorphic
Encryption [112.02441503951297]
Privacy-preserving inference of transformer models is in demand among cloud service users.
We introduce THE-X, an approximation approach for transformers, which enables privacy-preserving inference of pre-trained models.
arXiv Detail & Related papers (2022-06-01T03:49:18Z) - Stochastic Coded Federated Learning with Convergence and Privacy
Guarantees [8.2189389638822]
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a stochastic coded federated learning (SCFL) framework to mitigate the straggler issue.
We characterize the privacy guarantee by the mutual information differential privacy (MI-DP) and analyze the convergence performance in federated learning.
arXiv Detail & Related papers (2022-01-25T04:43:29Z) - Byzantine-Robust and Privacy-Preserving Framework for FedML [10.124385820546014]
Federated learning has emerged as a popular paradigm for collaboratively training a model from data distributed among a set of clients.
This learning setting presents two unique challenges: how to protect privacy of the clients' data during training, and how to ensure integrity of the trained model.
We propose a two-pronged solution that aims to address both challenges under a single framework.
arXiv Detail & Related papers (2021-05-05T19:36:21Z) - FedBoosting: Federated Learning with Gradient Protected Boosting for
Text Recognition [7.988454173034258]
The Federated Learning (FL) framework allows learning a shared model collaboratively without data being centralized or shared among data owners.
We show in this paper that the generalization ability of the joint model is poor on Non-Independent and Non-Identically Distributed (Non-IID) data.
We propose a novel boosting algorithm for FL to address both the generalization and gradient leakage issues.
arXiv Detail & Related papers (2020-07-14T18:47:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.