Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in
Federated Learning
- URL: http://arxiv.org/abs/2210.10880v2
- Date: Fri, 9 Jun 2023 23:18:21 GMT
- Title: Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in
Federated Learning
- Authors: Ruihan Wu, Xiangyu Chen, Chuan Guo, Kilian Q. Weinberger
- Abstract summary: Gradient inversion attacks enable recovery of training samples from model gradients in federated learning.
We show that existing defenses can be broken by a simple adaptive attack.
- Score: 31.374376311614675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gradient inversion attacks enable recovery of training samples from model
gradients in federated learning (FL) and constitute a serious threat to data
privacy. To mitigate this vulnerability, prior work has proposed both principled
defenses based on differential privacy and heuristic defenses based on
gradient compression as countermeasures. These defenses have so far been very
effective, in particular those based on gradient compression that allow the
model to maintain high accuracy while greatly reducing the effectiveness of
attacks. In this work, we argue that such findings underestimate the privacy
risk in FL. As a counterexample, we show that existing defenses can be broken
by a simple adaptive attack, where a model trained on auxiliary data is able to
invert gradients on both vision and language tasks.
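To make the adaptive attack concrete, here is a minimal sketch under assumed toy conditions: a small linear classifier stands in for the federated model, top-k sparsification stands in for a gradient-compression defense, and an MLP is trained on auxiliary (input, gradient) pairs to map defended gradients back to inputs. All names, architectures, and hyperparameters (FLModel, InversionNet, K_FRAC, the MSE reconstruction loss) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a "learning to invert" adaptive attack: train a network on
# auxiliary data to map (defended) gradients back to inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG_DIM, NUM_CLASSES, K_FRAC = 32 * 32 * 3, 10, 0.1

class FLModel(nn.Module):
    """Stand-in for the federated model whose gradients are shared."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(IMG_DIM, NUM_CLASSES)

    def forward(self, x):
        return self.fc(x)

def client_gradient(model, x, y):
    """Flattened gradient a client would send for one example."""
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def top_k_compress(g, frac=K_FRAC):
    """Heuristic gradient-compression defense: keep only the largest-magnitude entries."""
    k = max(1, int(frac * g.numel()))
    out = torch.zeros_like(g)
    idx = g.abs().topk(k).indices
    out[idx] = g[idx]
    return out

GRAD_DIM = IMG_DIM * NUM_CLASSES + NUM_CLASSES  # weight + bias entries of FLModel

class InversionNet(nn.Module):
    """Attacker model: maps an observed (defended) gradient to an input estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(GRAD_DIM, 1024), nn.ReLU(), nn.Linear(1024, IMG_DIM)
        )

    def forward(self, g):
        return self.net(g)

fl_model, attacker = FLModel(), InversionNet()
opt = torch.optim.Adam(attacker.parameters(), lr=1e-3)

for step in range(1000):
    # Auxiliary example standing in for data the attacker assumes is
    # distributed similarly to the clients' private data.
    x_aux = torch.rand(1, IMG_DIM)
    y_aux = torch.randint(NUM_CLASSES, (1,))
    g = top_k_compress(client_gradient(fl_model, x_aux, y_aux))
    x_hat = attacker(g.unsqueeze(0))
    loss = F.mse_loss(x_hat, x_aux)  # supervised reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# At attack time, the trained attacker is applied to a victim's defended gradient:
# x_reconstructed = attacker(defended_victim_gradient.unsqueeze(0))
```

The point of the sketch is only that, once the attacker can sample auxiliary data and pass it through the same defended gradient pipeline, gradient inversion reduces to ordinary supervised learning.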
Related papers
- CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling [63.07948989346385]
Federated learning collaboratively trains a neural network on a global server.
Each local client receives the current global model weights and sends back parameter updates (gradients) based on its local private data.
Existing gradient inversion attacks can exploit this vulnerability to recover private training instances from a client's gradient vectors.
We present a novel defense tailored for large neural network models.
arXiv Detail & Related papers (2025-01-27T01:06:23Z) - Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning [83.90283731845867]
We consider feature reconstruction attacks, a common risk aimed at compromising input data.
We show that, with just a simple transformation, models in vertical federated learning are resistant to state-of-the-art feature reconstruction attacks.
arXiv Detail & Related papers (2024-12-16T12:02:12Z) - GI-SMN: Gradient Inversion Attack against Federated Learning without Prior Knowledge [4.839514405631815]
Federated learning (FL) has emerged as a privacy-preserving machine learning approach.
Gradient inversion attacks can exploit the gradients of FL to recreate the original user data.
We propose a novel Gradient Inversion attack based on Style Migration Network (GI-SMN).
arXiv Detail & Related papers (2024-05-06T14:29:24Z) - Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [49.242828934501986]
Multimodal contrastive learning has emerged as a powerful paradigm for building high-quality features.
Backdoor attacks subtly embed malicious behaviors within the model during training.
We introduce an innovative token-based localized forgetting training regime.
arXiv Detail & Related papers (2024-03-24T18:33:15Z) - OASIS: Offsetting Active Reconstruction Attacks in Federated Learning [14.644814818768172]
Federated Learning (FL) has garnered significant attention for its potential to protect user privacy.
Recent research has demonstrated that FL protocols can be easily compromised by active reconstruction attacks.
We propose a defense mechanism based on image augmentation that effectively counteracts active reconstruction attacks.
arXiv Detail & Related papers (2023-11-23T00:05:17Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated
Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method can achieve robustness improvements with guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - Combining Stochastic Defenses to Resist Gradient Inversion: An Ablation Study [6.766058964358335]
Common defense mechanisms such as Differential Privacy (DP) or Privacy Modules (PMs) introduce randomness during computation to prevent such attacks; a minimal sketch of such randomized gradient perturbation appears after this list.
This paper introduces several targeted GI attacks that leverage this principle to bypass common defense mechanisms.
arXiv Detail & Related papers (2022-08-09T13:23:29Z) - Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Evaluating Gradient Inversion Attacks and Defenses in Federated Learning [43.993693910541275]
This paper evaluates existing attacks and defenses against gradient inversion attacks.
We show the trade-offs of privacy leakage and data utility of three proposed defense mechanisms.
Our findings suggest that the state-of-the-art attacks can currently be defended against with minor data utility loss.
arXiv Detail & Related papers (2021-11-30T19:34:16Z)
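Several of the defenses surveyed above, in particular the randomized mechanisms mentioned in the "Combining Stochastic Defenses" entry, share a generic clip-and-add-noise structure in the style of DP-SGD. The snippet below sketches only that generic structure; the clipping norm and noise multiplier are illustrative values, not parameters from any of the listed papers.

```python
# Generic clip-and-noise gradient perturbation (DP-SGD style), a minimal
# illustration of the randomized defenses discussed above. The constants are
# illustrative, not taken from any of the listed papers.
import torch

def perturb_gradient(g: torch.Tensor, clip_norm: float = 1.0,
                     noise_multiplier: float = 0.5) -> torch.Tensor:
    """Clip the gradient to a norm bound, then add Gaussian noise."""
    scale = torch.clamp(clip_norm / (g.norm() + 1e-12), max=1.0)
    noise = torch.randn_like(g) * noise_multiplier * clip_norm
    return g * scale + noise

# Example: a client perturbs its flattened gradient before sending it to the server.
g_defended = perturb_gradient(torch.randn(10_000))
```

In the adaptive-attack setting of the main paper, the inversion model would simply be trained on gradients that have already passed through such a perturbation.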
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.