Revisiting Gradient Pruning: A Dual Realization for Defending against
Gradient Attacks
- URL: http://arxiv.org/abs/2401.16687v1
- Date: Tue, 30 Jan 2024 02:18:30 GMT
- Title: Revisiting Gradient Pruning: A Dual Realization for Defending against
Gradient Attacks
- Authors: Lulu Xue, Shengshan Hu, Ruizhi Zhao, Leo Yu Zhang, Shengqing Hu,
Lichao Sun, Dezhong Yao
- Abstract summary: Collaborative learning (CL) is a distributed learning framework that allows users to jointly train a model by sharing their gradient updates only.
Gradient inversion attacks (GIAs), which recover users' training data from shared gradients, pose severe privacy threats to CL.
We propose a novel defense method, Dual Gradient Pruning (DGP), which can improve communication efficiency while preserving the utility and privacy of CL.
- Score: 16.037627212754487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative learning (CL) is a distributed learning framework that aims to
protect user privacy by allowing users to jointly train a model by sharing
their gradient updates only. However, gradient inversion attacks (GIAs), which
recover users' training data from shared gradients, pose severe privacy
threats to CL. Existing defense methods adopt different techniques, e.g.,
differential privacy, cryptography, and perturbation defenses, to defend
against GIAs. Nevertheless, all current defense methods suffer from a poor
trade-off between privacy, utility, and efficiency. To mitigate the weaknesses
of existing solutions, we propose a novel defense method, Dual Gradient Pruning
(DGP), based on gradient pruning, which can improve communication efficiency
while preserving the utility and privacy of CL. Specifically, DGP slightly
modifies gradient pruning to provide a stronger privacy guarantee, and it also
significantly improves communication efficiency; we provide a theoretical
analysis of its convergence and generalization. Our extensive experiments show that DGP can
effectively defend against the most powerful GIAs and reduce the communication
cost without sacrificing the model's utility.
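The abstract describes DGP as a modification of gradient pruning but, at this level of detail, does not spell out the pruning rule itself. For orientation, here is a minimal sketch of the conventional magnitude-based (top-k) gradient pruning that DGP builds on: a client keeps only the largest-magnitude fraction of its gradient entries and zeroes the rest before sharing. The function name prune_gradient and the keep_ratio parameter are illustrative assumptions, not taken from the paper.

```python
import torch

def prune_gradient(grad: torch.Tensor, keep_ratio: float = 0.1) -> torch.Tensor:
    """Magnitude-based (top-k) gradient pruning.

    Keeps the keep_ratio fraction of entries with the largest absolute value
    and zeroes the rest. Illustrative baseline, not DGP's exact rule.
    """
    flat = grad.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    # Indices of the k largest-magnitude entries.
    _, idx = torch.topk(flat.abs(), k)
    pruned = torch.zeros_like(flat)
    pruned[idx] = flat[idx]
    return pruned.view_as(grad)

# A client would prune each layer's gradient before uploading it.
grad = torch.randn(256, 128)
shared = prune_gradient(grad, keep_ratio=0.05)  # roughly 95% of entries are zero
```

Since only the non-zero (index, value) pairs need to be transmitted, aggressive pruning directly reduces communication cost; the paper's contribution is a dual realization of this idea with a stronger privacy guarantee and an accompanying convergence and generalization analysis.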
Related papers
- Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noises.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
arXiv Detail & Related papers (2024-04-22T09:29:14Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to the gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning [28.76786159247595]
Gradient leakage attacks exploit clients' uploaded gradients to reconstruct their sensitive data.
In this paper, we explore a novel defensive paradigm that departs from conventional gradient perturbation approaches.
We design Refiner, which jointly optimizes two metrics for privacy protection and performance maintenance.
arXiv Detail & Related papers (2022-12-05T05:36:15Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves robustness improvement with guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning [31.374376311614675]
Gradient inversion attacks enable recovery of training samples from model gradients in federated learning (a generic gradient-matching sketch of this attack family appears after this list).
We show that existing defenses can be broken by a simple adaptive attack.
arXiv Detail & Related papers (2022-10-19T20:41:30Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
- Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z)
- Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage [9.83989883339971]
The Federated Learning (FL) framework brings privacy benefits to distributed learning systems.
Recent studies have revealed that private information can still be leaked through shared information.
We propose a new type of leakage, i.e., Generative Gradient Leakage (GGL).
arXiv Detail & Related papers (2022-03-29T15:59:59Z)
- FedBoosting: Federated Learning with Gradient Protected Boosting for Text Recognition [7.988454173034258]
The Federated Learning (FL) framework allows learning a shared model collaboratively without data being centralized or shared among data owners.
We show in this paper that the generalization ability of the joint model is poor on Non-Independent and Non-Identically Distributed (Non-IID) data.
We propose a novel boosting algorithm for FL to address both the generalization and gradient leakage issues.
arXiv Detail & Related papers (2020-07-14T18:47:23Z)
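Several entries above (e.g., the client-side gradient inversion and adaptive-attack papers) revolve around gradient inversion: recovering training samples from shared gradients. For context, below is a minimal sketch of the generic gradient-matching formulation of such attacks (in the spirit of the well-known Deep Leakage from Gradients attack), not the specific method of any paper listed here; the optimizer choice, step counts, and names such as gradient_inversion are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_inversion(model, observed_grads, input_shape, num_classes,
                       steps=200, lr=0.1):
    """Generic gradient-matching attack sketch: optimize dummy data and a soft
    label so that their gradients match the gradients observed from a client."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    optimizer = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]

    for _ in range(steps):
        def closure():
            optimizer.zero_grad()
            pred = model(dummy_x)
            # Cross-entropy with a learnable soft label.
            loss = torch.sum(-F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1))
            dummy_grads = torch.autograd.grad(loss, params, create_graph=True)
            # Squared distance between dummy gradients and observed gradients.
            match = sum(((dg - og) ** 2).sum()
                        for dg, og in zip(dummy_grads, observed_grads))
            match.backward()
            return match
        optimizer.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```

Defenses such as DGP aim to make the observed gradients a poor target for this matching step (here, by zeroing most entries) while keeping them useful for aggregation.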
This list is automatically generated from the titles and abstracts of the papers on this site.