Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in
Federated Learning
- URL: http://arxiv.org/abs/2210.10880v2
- Date: Fri, 9 Jun 2023 23:18:21 GMT
- Title: Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in
Federated Learning
- Authors: Ruihan Wu, Xiangyu Chen, Chuan Guo, Kilian Q. Weinberger
- Abstract summary: Gradient inversion attacks enable the recovery of training samples from model gradients in federated learning.
We show that existing defenses can be broken by a simple adaptive attack.
- Score: 31.374376311614675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gradient inversion attacks enable the recovery of training samples from model
gradients in federated learning (FL) and constitute a serious threat to data
privacy. To mitigate this vulnerability, prior work has proposed both principled
defenses based on differential privacy and heuristic defenses based on gradient
compression. These defenses have so far been very
effective, in particular those based on gradient compression that allow the
model to maintain high accuracy while greatly reducing the effectiveness of
attacks. In this work, we argue that such findings underestimate the privacy
risk in FL. As a counterexample, we show that existing defenses can be broken
by a simple adaptive attack, where a model trained on auxiliary data is able to
invert gradients on both vision and language tasks.
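A minimal sketch of this "learning to invert" idea, for illustration only and not the authors' implementation: it assumes a toy victim MLP on 8x8 single-channel inputs, an auxiliary dataset drawn from the clients' distribution, and a small inverter network trained with a pixel-wise MSE objective to map per-sample gradients back to inputs; all names (victim, inverter, per_sample_gradient) and hyperparameters are hypothetical.

# Sketch of a learned gradient-inversion ("learning to invert") attack.
# Toy victim model, shapes, and training loop are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Victim model whose per-sample gradients are shared in FL.
victim = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
loss_fn = nn.CrossEntropyLoss()
grad_dim = sum(p.numel() for p in victim.parameters())

def per_sample_gradient(x, y):
    # Flattened gradient of the victim loss for a single (x, y) pair.
    victim.zero_grad()
    loss_fn(victim(x.unsqueeze(0)), y.unsqueeze(0)).backward()
    return torch.cat([p.grad.flatten() for p in victim.parameters()])

# Auxiliary data the attacker is assumed to hold (same distribution as clients').
aux_x = torch.rand(256, 1, 8, 8)
aux_y = torch.randint(0, 10, (256,))

# Inversion network: observed gradient -> reconstructed input.
inverter = nn.Sequential(nn.Linear(grad_dim, 256), nn.ReLU(), nn.Linear(256, 64))
opt = torch.optim.Adam(inverter.parameters(), lr=1e-3)

for epoch in range(3):
    for i in range(len(aux_x)):
        g = per_sample_gradient(aux_x[i], aux_y[i]).detach()
        recon = inverter(g).view(1, 8, 8)
        rec_loss = ((recon - aux_x[i]) ** 2).mean()  # pixel-wise MSE
        opt.zero_grad()
        rec_loss.backward()
        opt.step()

# At attack time, the trained inverter is applied to a client's shared gradient.
target_grad = per_sample_gradient(torch.rand(1, 8, 8), torch.tensor(3))
reconstruction = inverter(target_grad).view(8, 8)
print("reconstruction shape:", tuple(reconstruction.shape))

If a defense such as gradient compression or noising is in place, the natural adaptation would presumably be to train the inverter on gradients that have already passed through that defense, which is consistent with the claim above that such defenses can be broken.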
Related papers
- GI-SMN: Gradient Inversion Attack against Federated Learning without Prior Knowledge [4.839514405631815]
Federated learning (FL) has emerged as a privacy-preserving machine learning approach.
However, gradient inversion attacks can exploit the gradients of FL to recreate the original user data.
We propose a novel gradient inversion attack based on a style migration network (GI-SMN).
arXiv Detail & Related papers (2024-05-06T14:29:24Z)
- Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [49.242828934501986]
Multimodal contrastive learning has emerged as a powerful paradigm for building high-quality features.
Backdoor attacks subtly embed malicious behaviors within the model during training.
We introduce an innovative token-based localized forgetting training regime.
arXiv Detail & Related papers (2024-03-24T18:33:15Z)
- OASIS: Offsetting Active Reconstruction Attacks in Federated Learning [14.644814818768172]
Federated Learning (FL) has garnered significant attention for its potential to protect user privacy.
Recent research has demonstrated that FL protocols can be easily compromised by active reconstruction attacks.
We propose a defense mechanism based on image augmentation that effectively counteracts active reconstruction attacks.
arXiv Detail & Related papers (2023-11-23T00:05:17Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning [28.76786159247595]
Gradient leakage attacks exploit clients' uploaded gradients to reconstruct their sensitive data.
In this paper, we explore a novel defensive paradigm that departs from conventional gradient perturbation approaches.
We design Refiner, which jointly optimizes two metrics for privacy protection and performance maintenance.
arXiv Detail & Related papers (2022-12-05T05:36:15Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method can achieve robustness improvements with guarantees.
Our results against eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases, and we provide a new baseline attack (a minimal sketch of optimization-based gradient inversion appears after this list).
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Evaluating Gradient Inversion Attacks and Defenses in Federated Learning [43.993693910541275]
This paper evaluates existing gradient inversion attacks and the defenses against them.
We show the trade-offs between privacy leakage and data utility for three proposed defense mechanisms.
Our findings suggest that the state-of-the-art attacks can currently be defended against with minor data utility loss.
arXiv Detail & Related papers (2021-11-30T19:34:16Z)
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked through the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdoor attacks in federated learning through comprehensive experiments using synthetic datasets and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
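Several of the related papers above, such as GI-SMN and the baseline attack referenced earlier, build on or evaluate optimization-based gradient inversion, in which dummy data are optimized until they reproduce the gradient observed by the server. The sketch below is a generic, hypothetical illustration of that gradient-matching recipe, not the method of any specific paper listed here; the toy model, shapes, and hyperparameters are assumptions.

# Sketch of generic optimization-based gradient inversion ("gradient matching"):
# optimize a dummy input and soft label so that the gradient they induce matches
# the client's observed gradient. Toy model and shapes are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 10))  # tiny victim model
loss_fn = nn.CrossEntropyLoss()

def gradient_of(x, y_logits):
    # Gradient of the victim loss w.r.t. model parameters, keeping the graph
    # so it stays differentiable w.r.t. the dummy data.
    loss = loss_fn(model(x), torch.softmax(y_logits, dim=-1))  # soft labels need PyTorch >= 1.10
    return torch.autograd.grad(loss, model.parameters(), create_graph=True)

# "Observed" gradient the server would receive for one private sample.
true_x = torch.rand(1, 1, 8, 8)
true_y = torch.tensor([3])
observed = torch.autograd.grad(loss_fn(model(true_x), true_y), model.parameters())

# The attacker optimizes a dummy input and label to match that gradient.
dummy_x = torch.rand(1, 1, 8, 8, requires_grad=True)
dummy_y = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.Adam([dummy_x, dummy_y], lr=0.1)

for step in range(300):
    opt.zero_grad()
    grads = gradient_of(dummy_x, dummy_y)
    match = sum(((g - o) ** 2).sum() for g, o in zip(grads, observed))
    match.backward()
    opt.step()

print("final gradient-matching loss:", float(match))
print("reconstruction error:", float(((dummy_x - true_x) ** 2).mean()))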
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.