Gradient Leakage Defense with Key-Lock Module for Federated Learning
- URL: http://arxiv.org/abs/2305.04095v1
- Date: Sat, 6 May 2023 16:47:52 GMT
- Title: Gradient Leakage Defense with Key-Lock Module for Federated Learning
- Authors: Hanchi Ren and Jingjing Deng and Xianghua Xie and Xiaoke Ma and
Jianfeng Ma
- Abstract summary: Federated Learning (FL) is a widely adopted privacy-preserving machine learning approach.
Recent findings reveal that privacy may be compromised and sensitive information potentially recovered from shared gradients.
We propose a new gradient leakage defense technique that secures arbitrary model architectures using a private key-lock module.
- Score: 14.411227689702997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a widely adopted privacy-preserving machine
learning approach where private data remains local, enabling secure
computations and the exchange of local model gradients between local clients
and third-party parameter servers. However, recent findings reveal that privacy
may be compromised and sensitive information potentially recovered from shared
gradients. In this study, we offer detailed analysis and a novel perspective on
understanding the gradient leakage problem. These theoretical works lead to a
new gradient leakage defense technique that secures arbitrary model
architectures using a private key-lock module. Only the locked gradient is
transmitted to the parameter server for global model aggregation. Our proposed
learning method is resistant to gradient leakage attacks, and the key-lock
module is designed and trained to ensure that, without the private information
of the key-lock module: a) reconstructing private training data from the shared
gradient is infeasible; and b) the global model's inference performance is
significantly compromised. We discuss the theoretical underpinnings of why
gradients can leak private information and provide theoretical proof of our
method's effectiveness. We conducted extensive empirical evaluations with a
total of forty-four models on several popular benchmarks, demonstrating the
robustness of our proposed approach in both maintaining model performance and
defending against gradient leakage attacks.
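For intuition only, here is a minimal PyTorch-style sketch of the general idea described above: a client-private "key-lock" layer is inserted into the backbone, its key and lock parameters never leave the client, and only the gradients of the remaining (locked) layers are shared with the parameter server. The module design and names (KeyLockModule, LockedModel, shareable_gradients) and the scale-and-shift formulation are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class KeyLockModule(nn.Module):
    """Illustrative client-private key-lock layer (hypothetical design).

    A private key vector is mapped by a small "lock" layer to per-channel
    scale and shift values that transform the feature maps. Neither the key
    nor the lock parameters are ever shared with the parameter server.
    """

    def __init__(self, num_channels: int, key_dim: int = 64):
        super().__init__()
        # Client-specific private key (kept local; illustrative initialisation).
        self.register_buffer("key", torch.randn(key_dim))
        # "Lock": maps the key to 2 * num_channels values (scale and shift).
        self.lock = nn.Linear(key_dim, 2 * num_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scale, shift = self.lock(self.key).chunk(2)
        scale = torch.sigmoid(scale).view(1, -1, 1, 1)  # per-channel gate in (0, 1)
        shift = shift.view(1, -1, 1, 1)
        return x * scale + shift


class LockedModel(nn.Module):
    """Small CNN backbone with a key-lock module inserted after the first block."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            KeyLockModule(32),                       # private, stays on the client
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


def shareable_gradients(model: LockedModel) -> dict:
    """Gradients a client would send to the server: everything except the
    parameters of the private key-lock module."""
    return {
        name: param.grad.clone()
        for name, param in model.named_parameters()
        if param.grad is not None and "lock" not in name
    }
```

Under a setup like this, an attacker who only observes the shared gradients is missing the private transformation applied to intermediate features, which is the intuition behind the paper's claims that reconstructing training data becomes infeasible and that the aggregated model is not usable without each client's own key-lock module.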
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground truth data can be recovered through a gradient technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
- Gradients Stand-in for Defending Deep Leakage in Federated Learning [0.0]
This study introduces a novel, efficacious method aimed at safeguarding against gradient leakage, namely AdaDefense.
The proposed approach not only effectively prevents gradient leakage, but also ensures that the overall performance of the model remains largely unaffected.
arXiv Detail & Related papers (2024-10-11T11:44:13Z)
- Seeing the Forest through the Trees: Data Leakage from Partial Transformer Gradients [11.6665056456826]
Gradients from a single Transformer layer, or even a single linear component with 0.54% of the parameters, are susceptible to training data leakage.
Applying differential privacy to gradients during training offers limited protection against this data-disclosure vulnerability.
arXiv Detail & Related papers (2024-06-03T05:15:04Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Mixed Precision Quantization to Tackle Gradient Leakage Attacks in Federated Learning [1.7205106391379026]
Federated Learning (FL) enables collaborative model building among a large number of participants without the need for explicit data sharing.
This approach, however, is vulnerable to privacy inference attacks.
In particular, gradient leakage attacks, which retrieve sensitive data from model gradients with a high success rate, put FL models at greater risk because gradient exchange is inherent to their architecture.
arXiv Detail & Related papers (2022-10-22T04:24:32Z)
- Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z)
- Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage [9.83989883339971]
The Federated Learning (FL) framework brings privacy benefits to distributed learning systems.
Recent studies have revealed that private information can still be leaked through shared information.
We propose a new type of leakage, i.e., Generative Gradient Leakage (GGL).
arXiv Detail & Related papers (2022-03-29T15:59:59Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- FedBoosting: Federated Learning with Gradient Protected Boosting for Text Recognition [7.988454173034258]
The Federated Learning (FL) framework allows a shared model to be learned collaboratively without data being centralized or shared among data owners.
We show in this paper that the generalization ability of the joint model is poor on Non-Independent and Non-Identically Distributed (Non-IID) data.
We propose a novel boosting algorithm for FL to address both the generalization and gradient leakage issues.
arXiv Detail & Related papers (2020-07-14T18:47:23Z)
- Understanding Gradient Clipping in Private SGD: A Geometric Perspective [68.61254575987013]
Deep learning models are increasingly popular in many machine learning applications where the training data may contain sensitive information.
Many learning systems now incorporate differential privacy by training their models with (differentially) private SGD.
A key step in each private SGD update is gradient clipping, which shrinks the gradient of an individual example whenever its L2 norm exceeds some threshold; a minimal sketch of this step appears after this list.
arXiv Detail & Related papers (2020-06-27T19:08:12Z)
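As referenced in the last entry above, here is a minimal sketch of the per-example L2 clipping step it describes. The helper name clip_per_example_gradients, the flattened gradient layout, and the threshold value are illustrative assumptions, and the noise-addition step of DP-SGD is omitted.

```python
import torch


def clip_per_example_gradients(per_example_grads: torch.Tensor,
                               clip_threshold: float) -> torch.Tensor:
    """Shrink each example's gradient so its L2 norm is at most clip_threshold.

    per_example_grads: shape (batch_size, num_params), one flattened gradient
    per training example (e.g. obtained from per-sample autograd).
    """
    # Per-example L2 norms, shape (batch_size, 1).
    norms = per_example_grads.norm(p=2, dim=1, keepdim=True)
    # min(1, C / ||g_i||): small gradients pass through unchanged,
    # large ones are rescaled onto the L2 ball of radius clip_threshold.
    scale = (clip_threshold / (norms + 1e-12)).clamp(max=1.0)
    return per_example_grads * scale


# Toy example: 4 examples, 3 parameters, clip threshold C = 1.0.
g = torch.tensor([[3.0, 4.0, 0.0],    # norm 5.0   -> rescaled to norm 1.0
                  [0.1, 0.1, 0.1],    # norm ~0.17 -> unchanged
                  [0.0, 2.0, 0.0],    # norm 2.0   -> rescaled to norm 1.0
                  [1.0, 0.0, 0.0]])   # norm 1.0   -> unchanged
clipped = clip_per_example_gradients(g, clip_threshold=1.0)
print(clipped.norm(dim=1))  # ~tensor([1.0000, 0.1732, 1.0000, 1.0000])
```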
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.