Unraveling the Connections between Privacy and Certified Robustness in
Federated Learning Against Poisoning Attacks
- URL: http://arxiv.org/abs/2209.04030v3
- Date: Sat, 30 Dec 2023 16:17:36 GMT
- Title: Unraveling the Connections between Privacy and Certified Robustness in
Federated Learning Against Poisoning Attacks
- Authors: Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Arash Nourian, Sanmi
Koyejo, Bo Li
- Abstract summary: Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users.
Several studies have shown that FL is vulnerable to poisoning attacks.
To protect the privacy of local users, FL is usually trained in a differentially private way.
- Score: 68.20436971825941
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) provides an efficient paradigm to jointly train a
global model leveraging data from distributed users. As local training data
comes from different users who may not be trustworthy, several studies have
shown that FL is vulnerable to poisoning attacks. Meanwhile, to protect the
privacy of local users, FL is usually trained in a differentially private way
(DPFL). Thus, in this paper, we ask: What are the underlying connections
between differential privacy and certified robustness in FL against poisoning
attacks? Can we leverage the innate privacy property of DPFL to provide
certified robustness for FL? Can we further improve the privacy of FL to
improve such robustness certification? We first investigate both user-level and
instance-level privacy of FL and provide formal privacy analysis to achieve
improved instance-level privacy. We then provide two robustness certification
criteria: certified prediction and certified attack inefficacy for DPFL on both
user and instance levels. Theoretically, we provide the certified robustness of
DPFL based on both criteria given a bounded number of adversarial users or
instances. Empirically, we conduct extensive experiments to verify our theories
under a range of poisoning attacks on different datasets. We find that
increasing the level of privacy protection in DPFL results in stronger
certified attack inefficacy; however, it does not necessarily lead to a
stronger certified prediction. Thus, achieving the optimal certified prediction
requires a proper balance between privacy and utility loss.
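The DPFL setting that the paper analyzes can be made concrete with a short sketch. Below is a minimal, illustrative user-level DP aggregation step in the DP-FedAvg style (per-user update clipping followed by Gaussian noise on the average); the function and parameter names (aggregate_dp, clip_norm, noise_multiplier) are assumptions for illustration and not the authors' implementation.

```python
# Minimal sketch of a user-level DP federated-averaging step, assuming each
# user update is a flat NumPy vector of model-weight deltas. Illustrative only.
import numpy as np

def aggregate_dp(user_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip each user's update to an L2 ball, average, and add Gaussian noise.

    Clipping bounds any single user's influence on the aggregate; Gaussian
    noise calibrated to clip_norm yields user-level differential privacy for
    the released average (the kind of DPFL guarantee the paper builds on).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(user_updates)
    clipped = []
    for u in user_updates:
        norm = np.linalg.norm(u)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # shrink only if too large
        clipped.append(u * scale)
    avg = np.mean(clipped, axis=0)
    # Usual Gaussian-mechanism calibration: sensitivity of the average is
    # clip_norm / n when one user's update changes arbitrarily.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=avg.shape)
    return avg + noise

# Usage: three simulated user updates for a 4-parameter model.
updates = [np.random.randn(4) for _ in range(3)]
new_global_delta = aggregate_dp(updates, clip_norm=1.0, noise_multiplier=1.5)
```

The noise multiplier (stronger privacy as it grows) is the knob whose effect on certified prediction and certified attack inefficacy the paper's analysis and experiments characterize.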
Related papers
- TPFL: A Trustworthy Personalized Federated Learning Framework via Subjective Logic [13.079535924498977]
Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy.
Most FL approaches focusing solely on privacy protection fall short in scenarios where trustworthiness is crucial.
We introduce a Trustworthy Personalized Federated Learning (TPFL) framework designed for classification tasks via subjective logic.
arXiv Detail & Related papers (2024-10-16T07:33:29Z)
- Privacy Attack in Federated Learning is Not Easy: An Experimental Study [5.065947993017158]
Federated learning (FL) is an emerging distributed machine learning paradigm proposed for privacy preservation.
Recent studies have indicated that FL cannot entirely guarantee privacy protection.
It remains uncertain whether privacy attacks on FL are effective in realistic federated environments.
arXiv Detail & Related papers (2024-09-28T10:06:34Z)
- Re-Evaluating Privacy in Centralized and Decentralized Learning: An Information-Theoretical and Empirical Study [4.7773230870500605]
Decentralized Federated Learning (DFL) has garnered attention for its robustness and scalability.
Recent work by Pasquini et al. challenges this view, demonstrating that DFL does not inherently improve privacy against empirical attacks.
arXiv Detail & Related papers (2024-09-21T23:05:50Z)
- SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks [12.580891810557482]
Federated learning (FL) is attractive for privacy-preserving training on distributed data.
We propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of the locally purified model.
We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against various poisoning attacks.
arXiv Detail & Related papers (2023-09-19T13:31:33Z)
- Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks [53.81129518924231]
We conduct the first study of backdoor attacks in the pFL framework.
We show that pFL methods with partial model-sharing can significantly boost robustness against backdoor attacks.
We propose a lightweight defense method, Simple-Tuning, which empirically improves defense performance against backdoor attacks.
arXiv Detail & Related papers (2023-02-03T11:58:14Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method can achieve improvement with a robustness guarantee.
Our results against eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
arXiv Detail & Related papers (2021-06-15T16:50:54Z)
- Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective [47.23145404191034]
Federated learning (FL) is a popular distributed learning framework that can reduce privacy risks by not explicitly sharing private data.
Recent works demonstrated that sharing model updates makes FL vulnerable to inference attacks.
Our key observation is that data representation leakage from gradients is the essential cause of privacy leakage in FL.
arXiv Detail & Related papers (2020-12-08T20:42:12Z)
- Local and Central Differential Privacy for Robustness and Privacy in Federated Learning [13.115388879531967]
Federated Learning (FL) allows multiple participants to train machine learning models collaboratively by keeping their datasets local while only exchanging model updates.
This paper investigates whether and to what extent one can use differential privacy (DP) to protect both privacy and robustness in FL.
arXiv Detail & Related papers (2020-09-08T07:37:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.