Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning
- URL: http://arxiv.org/abs/2212.02042v2
- Date: Mon, 16 Oct 2023 12:34:23 GMT
- Title: Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning
- Authors: Mingyuan Fan, Cen Chen, Chengyu Wang, Xiaodan Li, Wenmeng Zhou, Jun Huang
- Abstract summary: Gradient leakage attacks exploit clients' uploaded gradients to reconstruct their sensitive data.
In this paper, we explore a novel defensive paradigm that departs from conventional gradient perturbation approaches.
We design Refiner, which jointly optimizes two metrics for privacy protection and performance maintenance.
- Score: 28.76786159247595
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent works have brought attention to the vulnerability of Federated
Learning (FL) systems to gradient leakage attacks. Such attacks exploit
clients' uploaded gradients to reconstruct their sensitive data, thereby
compromising the privacy protection capability of FL. In response, various
defense mechanisms have been proposed to mitigate this threat by manipulating
the uploaded gradients. Unfortunately, empirical evaluations have demonstrated
limited resilience of these defenses against sophisticated attacks, indicating
an urgent need for more effective defenses. In this paper, we explore a novel
defensive paradigm that departs from conventional gradient perturbation
approaches and instead focuses on the construction of robust data. Intuitively,
if robust data exhibits low semantic similarity with clients' raw data, the
gradients associated with robust data can effectively obfuscate attackers. To
this end, we design Refiner that jointly optimizes two metrics for privacy
protection and performance maintenance. The utility metric is designed to
promote consistency between the gradients of key parameters associated with
robust data and those derived from clients' data, thus maintaining model
performance. Furthermore, the privacy metric guides the generation of robust
data towards enlarging the semantic gap with clients' data. Theoretical
analysis supports the effectiveness of Refiner, and empirical evaluations on
multiple benchmark datasets demonstrate the superior effectiveness of Refiner
in defending against state-of-the-art attacks.
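The construction behind Refiner can be pictured as a small optimization loop over the data whose gradients a client uploads. Below is a minimal PyTorch sketch of that two-metric trade-off, offered as an illustration rather than the authors' implementation: the function names, the top-k magnitude rule for selecting key parameters, the cosine-similarity proxy for the semantic gap, and the weight alpha are all assumptions.

```python
import torch
import torch.nn.functional as F

def refine(model, feature_extractor, x_raw, y,
           alpha=1.0, k_frac=0.01, steps=100, lr=0.1):
    """Sketch: craft 'robust' data whose gradients on key parameters
    match the raw data's (utility) while its semantics drift away
    from the raw data (privacy)."""
    # Reference gradients computed on the client's raw data.
    ref_grads = torch.autograd.grad(
        F.cross_entropy(model(x_raw), y), model.parameters())

    # Assumed key-parameter rule: keep the top k_frac entries of each
    # gradient tensor by magnitude (a plausible choice, not necessarily
    # the paper's).
    masks = [(g.abs() >= g.abs().flatten().quantile(1 - k_frac)).float()
             for g in ref_grads]

    # Frozen semantic reference for the privacy term.
    with torch.no_grad():
        raw_feat = feature_extractor(x_raw).flatten(1)

    x_rob = x_raw.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_rob], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        grads = torch.autograd.grad(
            F.cross_entropy(model(x_rob), y), model.parameters(),
            create_graph=True)  # differentiable w.r.t. x_rob
        # Utility metric: key-parameter gradients stay close to raw ones.
        utility = sum(((g - r) * m).pow(2).sum()
                      for g, r, m in zip(grads, ref_grads, masks))
        # Privacy metric: minimize semantic similarity to the raw data
        # (cosine similarity in feature space as an assumed proxy).
        privacy = F.cosine_similarity(
            feature_extractor(x_rob).flatten(1), raw_feat).mean()
        (utility + alpha * privacy).backward()
        opt.step()
    return x_rob.detach()
```

In an FL round, the client would run something like refine(model, extractor, x_raw, y) and upload the gradients computed on the returned data, so an attacker inverting those gradients recovers the semantically distant robust data rather than the client's raw data.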
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground-truth data can be recovered through a gradient-based technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
- A Theoretical Insight into Attack and Defense of Gradient Leakage in Transformer [11.770915202449517]
The Deep Leakage from Gradients (DLG) attack has emerged as a prevalent and highly effective method for extracting sensitive training data by inspecting exchanged gradients (a minimal sketch of the attack loop appears after this list).
This research presents a comprehensive analysis of the gradient leakage method when applied specifically to transformer-based models.
arXiv Detail & Related papers (2023-11-22T09:58:01Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, which undermine model integrity through both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first that is robust against strong adaptive adversaries and effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Re-thinking Data Availablity Attacks Against Deep Neural Networks [53.64624167867274]
In this paper, we re-examine the concept of unlearnable examples and discern that the existing robust error-minimizing noise presents an inaccurate optimization objective.
We introduce a novel optimization paradigm that yields improved protection results with reduced computational time requirements.
arXiv Detail & Related papers (2023-05-18T04:03:51Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a defense based on trigger reverse engineering and show that our method achieves robustness improvement with guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning [31.374376311614675]
Gradient inversion attacks enable the recovery of training samples from model gradients in federated learning.
We show that existing defenses can be broken by a simple adaptive attack.
arXiv Detail & Related papers (2022-10-19T20:41:30Z)
- Concealing Sensitive Samples against Gradient Leakage in Federated Learning [41.43099791763444]
Federated Learning (FL) is a distributed learning paradigm that enhances users' privacy by eliminating the need for clients to share raw, private data with the server.
Recent studies expose the vulnerability of FL to model inversion attacks, where adversaries reconstruct users' private data by eavesdropping on the shared gradient information.
We present a simple, yet effective defense strategy that obfuscates the gradients of the sensitive data with concealed samples.
arXiv Detail & Related papers (2022-09-13T04:19:35Z)
- Dropout is NOT All You Need to Prevent Gradient Leakage [0.6021787236982659]
We analyze the effect of dropout on iterative gradient inversion attacks.
We propose a novel Dropout Inversion Attack (DIA) that jointly optimizes for client data and dropout masks.
We find that our proposed attack bypasses the protection seemingly induced by dropout and reconstructs client data with high fidelity.
arXiv Detail & Related papers (2022-08-12T08:29:44Z)
- Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
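For context on the threat model shared by most of the entries above, the following is a minimal sketch of a DLG-style gradient inversion loop, the attack referenced in the transformer paper in this list. It assumes the attacker observes the model and a victim's uploaded gradients and that the label is already known; label recovery (as in iDLG) and the image priors used by stronger attacks are omitted.

```python
import torch
import torch.nn.functional as F

def dlg_attack(model, leaked_grads, x_shape, y, steps=300, lr=0.1):
    """Sketch: optimize dummy data until its gradients match the
    gradients leaked by a victim client."""
    x_dummy = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([x_dummy], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        dummy_grads = torch.autograd.grad(
            F.cross_entropy(model(x_dummy), y), model.parameters(),
            create_graph=True)  # differentiable w.r.t. x_dummy
        # Gradient-matching objective: squared distance between dummy
        # and leaked gradients, summed over all parameter tensors.
        loss = sum((dg - lg).pow(2).sum()
                   for dg, lg in zip(dummy_grads, leaked_grads))
        loss.backward()
        opt.step()
    return x_dummy.detach()  # approximate reconstruction of the data
```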