Provable Defense against Privacy Leakage in Federated Learning from
Representation Perspective
- URL: http://arxiv.org/abs/2012.06043v1
- Date: Tue, 8 Dec 2020 20:42:12 GMT
- Title: Provable Defense against Privacy Leakage in Federated Learning from
Representation Perspective
- Authors: Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen
- Abstract summary: Federated learning (FL) is a popular distributed learning framework that can reduce privacy risks by not explicitly sharing private data.
Recent works demonstrated that sharing model updates makes FL vulnerable to inference attacks.
Our key observation is that data representation leakage from gradients is the essential cause of privacy leakage in FL.
- Score: 47.23145404191034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a popular distributed learning framework that can
reduce privacy risks by not explicitly sharing private data. However, recent
works demonstrated that sharing model updates makes FL vulnerable to inference
attacks. In this work, we present the key observation that data representation
leakage from gradients is the essential cause of privacy leakage in FL. We also
provide an analysis of this observation to explain how the data representation
is leaked. Based on this observation, we propose a defense against model
inversion attacks in FL. The key idea of our defense is to learn to perturb the
data representation such that the quality of the reconstructed data is severely
degraded while FL performance is maintained. In addition, we derive a certified
robustness guarantee for FL and a convergence guarantee for FedAvg after
applying our defense. To evaluate our defense, we conduct experiments on MNIST
and CIFAR10, defending against the DLG attack and the GS attack. The results
demonstrate that, without sacrificing accuracy, our defense increases the mean
squared error between the reconstructed data and the raw data by more than 160x
for both attacks, compared with baseline defense methods, significantly
improving the privacy of the FL system.
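As a rough sketch of the key idea only (the layer choice, the Gaussian noise standing in for the paper's learned perturbation, and the scale are all assumptions for illustration), a client could perturb the data representation at one layer during training, so that the gradients it uploads carry only a degraded representation:

```python
# Minimal sketch: perturb the data representation at one layer before the
# client computes its update. Random Gaussian noise stands in for the
# paper's *learned* perturbation; architecture and scale are placeholders.
import torch
import torch.nn as nn

class PerturbedClientModel(nn.Module):
    def __init__(self, perturb_scale: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.fc = nn.Linear(16 * 4 * 4, 10)
        self.perturb_scale = perturb_scale

    def forward(self, x):
        r = self.features(x)                      # data representation
        if self.training:
            # Degrade the representation so gradient-inversion attacks
            # reconstruct low-quality inputs.
            r = r + self.perturb_scale * torch.randn_like(r)
        return self.fc(r)

model = PerturbedClientModel()
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()  # these are the gradients the client would upload
```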
Related papers
- SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks [12.580891810557482]
Federated learning (FL) is attractive for privacy-preserving distributed training over decentralized data.
We propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of a locally purified model.
We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against various poisoning attacks.
arXiv Detail & Related papers (2023-09-19T13:31:33Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first that is robust against strong adaptive adversaries; it is effective in real-world data scenarios with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
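The summary above does not spell out MESAS's metrics, so the sketch below only shows the general shape of a multi-metric poisoning filter; the two metrics (update norm and cosine similarity to the mean update) and the z-score threshold are illustrative assumptions, not MESAS's actual procedure.

```python
# Illustrative multi-metric filter over flattened client updates; the
# metrics and outlier rule are assumptions, not MESAS's actual design.
import torch
import torch.nn.functional as F

def filter_updates(updates: list[torch.Tensor], z_thresh: float = 2.0):
    stacked = torch.stack(updates)                      # (clients, dim)
    mean_update = stacked.mean(dim=0, keepdim=True)
    # Metric 1: update magnitude. Metric 2: direction vs. the average.
    metrics = [stacked.norm(dim=1),
               F.cosine_similarity(stacked, mean_update)]
    kept = []
    for i, u in enumerate(updates):
        # Keep an update only if it is not an outlier on any metric.
        if all(abs((m[i] - m.mean()) / (m.std() + 1e-8)) <= z_thresh
               for m in metrics):
            kept.append(u)
    return kept
```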
- Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning [28.76786159247595]
Gradient leakage attacks exploit clients' uploaded gradients to reconstruct their sensitive data.
In this paper, we explore a novel defensive paradigm that departs from conventional gradient perturbation approaches.
We design Refiner, which jointly optimizes two metrics for privacy protection and performance maintenance.
arXiv Detail & Related papers (2022-12-05T05:36:15Z)
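A minimal sketch of the two-objective idea, assuming a utility term (keep the task loss low so gradients stay useful) and a privacy term (push refined inputs away from the raw ones); the concrete terms, weights, and optimizer are placeholders, not Refiner's actual metrics.

```python
# Sketch: refine inputs before gradient computation by trading off an
# assumed utility term against an assumed privacy term.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
loss_fn = nn.CrossEntropyLoss()
x_raw, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))

x_refined = x_raw.clone().requires_grad_(True)
opt = torch.optim.Adam([x_refined], lr=0.05)
for _ in range(50):
    opt.zero_grad()
    utility = loss_fn(model(x_refined), y)        # keep gradients useful
    privacy = -((x_refined - x_raw) ** 2).mean()  # move away from raw data
    (utility + 0.1 * privacy).backward()
    opt.step()
# The client would now compute and upload gradients on x_refined, not x_raw.
```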
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering-based defense and show that our method achieves improvement with guaranteed robustness.
Our results against eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
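FLIP's exact procedure is not given above; the sketch below shows a generic trigger reverse-engineering step of the kind such defenses build on, where a mask and pattern are optimized to flip predictions toward a hypothesized target class (the model, target class, and loss weights are assumptions).

```python
# Generic trigger reverse-engineering (Neural-Cleanse-style), not FLIP's
# exact method: optimize a small mask/pattern that flips predictions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10)).eval()
x = torch.rand(16, 1, 28, 28)        # benign samples held by a client
target = torch.full((16,), 3)        # hypothesized backdoor target class

mask = torch.zeros(1, 28, 28, requires_grad=True)
pattern = torch.rand(1, 28, 28, requires_grad=True)
opt = torch.optim.Adam([mask, pattern], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    m = torch.sigmoid(mask)          # keep the mask in [0, 1]
    stamped = (1 - m) * x + m * torch.sigmoid(pattern)
    loss = (nn.CrossEntropyLoss()(model(stamped), target)
            + 0.01 * m.abs().sum())  # prefer small triggers
    loss.backward()
    opt.step()
# A small recovered mask that reliably flips predictions hints at a backdoor.
```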
- FedDef: Defense Against Gradient Leakage in Federated Learning-based Network Intrusion Detection Systems [15.39058389031301]
We propose two privacy evaluation metrics designed for FL-based NIDSs.
We propose FedDef, a novel optimization-based input perturbation defense strategy with theoretical guarantee.
We experimentally evaluate four existing defenses on four datasets and show that our defense outperforms all the baselines in terms of privacy protection.
arXiv Detail & Related papers (2022-10-08T15:23:30Z)
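The paper's two metrics are not specified in the summary above; as one simple stand-in, privacy protection of this kind is often scored by the reconstruction error an attacker achieves:

```python
# A simple stand-in privacy score (not FedDef's actual metrics): mean
# per-sample MSE between an attacker's reconstructions and the raw inputs.
import torch

def reconstruction_mse(raw: torch.Tensor, reconstructed: torch.Tensor) -> float:
    # Higher MSE means reconstructions are further from the private data,
    # i.e. stronger privacy protection.
    per_sample = ((raw - reconstructed) ** 2).flatten(start_dim=1).mean(dim=1)
    return per_sample.mean().item()
```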
- Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks [68.20436971825941]
Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users.
Several studies have shown that FL is vulnerable to poisoning attacks.
To protect the privacy of local users, FL is usually trained in a differentially private way.
arXiv Detail & Related papers (2022-09-08T21:01:42Z)
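The differentially private training the entry refers to typically follows the standard clip-then-noise recipe below; the clipping norm and noise multiplier are example values.

```python
# Standard DP-style sanitization of a client update: clip its norm to
# bound sensitivity, then add calibrated Gaussian noise before sharing.
import torch

def dp_sanitize(update: torch.Tensor, clip_norm: float = 1.0,
                noise_mult: float = 1.1) -> torch.Tensor:
    scale = torch.clamp(clip_norm / (update.norm() + 1e-8), max=1.0)
    clipped = update * scale
    return clipped + noise_mult * clip_norm * torch.randn_like(clipped)
```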
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
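For concreteness, a minimal gradient-matching loop in the style of DLG (toy model for illustration; real attacks add priors, regularizers, and better initialization):

```python
# Minimal DLG-style gradient inversion: optimize dummy data whose gradient
# matches the gradient a client uploaded. Toy model for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
params = tuple(model.parameters())
loss_fn = nn.CrossEntropyLoss()

# The victim's uploaded gradient, computed on private data.
x_true, y_true = torch.rand(1, 1, 28, 28), torch.tensor([7])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true), params)

# The attacker optimizes dummy data and soft labels to match that gradient.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    soft = torch.softmax(y_dummy, dim=-1)
    dummy_loss = -(soft * torch.log_softmax(model(x_dummy), dim=-1)).sum()
    dummy_grads = torch.autograd.grad(dummy_loss, params, create_graph=True)
    diff = sum(((dg - tg) ** 2).sum()
               for dg, tg in zip(dummy_grads, true_grads))
    diff.backward()
    return diff

for _ in range(20):
    opt.step(closure)
# x_dummy now approximates the private input x_true.
```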
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Defending against Backdoors in Federated Learning with Robust Learning Rate [25.74681620689152]
Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data.
In a backdoor attack, an adversary tries to embed a backdoor functionality into the model during training that can later be activated to cause a desired misclassification.
We propose a lightweight defense that requires minimal change to the FL protocol.
arXiv Detail & Related papers (2020-07-07T23:38:35Z)
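A compact sketch of the robust learning rate idea (the sign-agreement threshold and learning rate below are example values): the server reverses its learning rate on coordinates where clients' update signs disagree, dampening the directions a backdoor would need.

```python
# Sketch of robust learning rate aggregation: flip the per-dimension
# learning rate where client update signs disagree (threshold is an example).
import torch

def robust_lr_aggregate(updates: list[torch.Tensor], lr: float = 0.1,
                        theta: int = 4) -> torch.Tensor:
    stacked = torch.stack(updates)           # (num_clients, dim), flattened
    sign_sum = stacked.sign().sum(dim=0)     # per-dimension sign agreement
    per_dim_lr = torch.full_like(sign_sum, lr)
    per_dim_lr[sign_sum.abs() < theta] *= -1.0   # low agreement: reverse lr
    return per_dim_lr * stacked.mean(dim=0)
```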