Local and Central Differential Privacy for Robustness and Privacy in
Federated Learning
- URL: http://arxiv.org/abs/2009.03561v5
- Date: Fri, 27 May 2022 12:03:31 GMT
- Title: Local and Central Differential Privacy for Robustness and Privacy in
Federated Learning
- Authors: Mohammad Naseri, Jamie Hayes, and Emiliano De Cristofaro
- Abstract summary: Federated Learning (FL) allows multiple participants to train machine learning models collaboratively by keeping their datasets local while only exchanging model updates.
This paper investigates whether and to what extent one can use Differential Privacy (DP) to protect both privacy and robustness in FL.
- Score: 13.115388879531967
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) allows multiple participants to train machine
learning models collaboratively by keeping their datasets local while only
exchanging model updates. Alas, this is not necessarily free from privacy and
robustness vulnerabilities, e.g., via membership, property, and backdoor
attacks. This paper investigates whether and to what extent one can use
Differential Privacy (DP) to protect both privacy and robustness in FL. To this
end, we present a first-of-its-kind evaluation of Local and Central
Differential Privacy (LDP/CDP) techniques in FL, assessing their feasibility
and effectiveness. Our experiments show that both DP variants do defend against
backdoor attacks, albeit with varying levels of protection-utility trade-offs,
and more effectively than other robustness defenses. DP also mitigates
white-box membership inference attacks in FL, and our work is the first to show
it empirically. Neither LDP nor CDP, however, defends against property
inference. Overall, our work provides a comprehensive, reusable measurement
methodology to quantify the trade-offs between robustness/privacy and utility
in differentially private FL.
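To make the two settings concrete, here is a minimal, illustrative Python sketch (not the paper's code) of where noise enters in each: under LDP every client clips and perturbs its own update before upload, while under CDP a trusted server clips each update, averages, and perturbs the aggregate. The clipping bound and noise multiplier are assumed values chosen for illustration.

```python
# Illustrative sketch only: contrasting Local DP (client-side noise) with
# Central DP (server-side noise after aggregation). CLIP_NORM and
# NOISE_MULTIPLIER are assumed hyperparameters, not values from the paper.
import numpy as np

CLIP_NORM = 1.0         # assumed per-update L2 clipping bound
NOISE_MULTIPLIER = 1.1  # assumed; sets sigma relative to the clip bound

def clip(update: np.ndarray, bound: float) -> np.ndarray:
    """Scale the update so its L2 norm is at most `bound`."""
    norm = np.linalg.norm(update)
    return update * min(1.0, bound / (norm + 1e-12))

def ldp_client_update(update: np.ndarray) -> np.ndarray:
    """Local DP: the client clips and noises before anything leaves it."""
    clipped = clip(update, CLIP_NORM)
    return clipped + np.random.normal(0.0, NOISE_MULTIPLIER * CLIP_NORM, update.shape)

def cdp_server_aggregate(updates: list[np.ndarray]) -> np.ndarray:
    """Central DP: a trusted server clips each update, averages, then noises."""
    clipped = [clip(u, CLIP_NORM) for u in updates]
    mean = np.mean(clipped, axis=0)
    # The averaged query has lower sensitivity, so less noise is needed
    # per coordinate; this is why CDP typically costs less utility.
    sigma = NOISE_MULTIPLIER * CLIP_NORM / len(updates)
    return mean + np.random.normal(0.0, sigma, mean.shape)
```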
Related papers
- Privacy Attack in Federated Learning is Not Easy: An Experimental Study [5.065947993017158]
Federated learning (FL) is an emerging distributed machine learning paradigm proposed for privacy preservation.
Recent studies have indicated that FL cannot entirely guarantee privacy protection.
It remains uncertain whether privacy attacks on FL algorithms are effective in realistic federated environments.
arXiv Detail & Related papers (2024-09-28T10:06:34Z)
- Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capturing and ensuring the reliability of privacy protections.
arXiv Detail & Related papers (2024-08-28T08:22:21Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
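For intuition on this finding, the following toy Python sketch (a hypothetical setup: three clients with pre-shared pairwise masks) shows why secure aggregation by itself adds no differential privacy: the pairwise masks cancel in the sum, so the server still learns the exact aggregate that membership inference can exploit.

```python
# Toy sketch of pairwise-masked secure aggregation (hypothetical setup, not
# the paper's protocol). Masks cancel in the sum, so the server recovers the
# exact aggregate; without added noise, nothing is differentially private.
import numpy as np

rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]  # toy client updates

# Client i adds mask m_ij for each j > i and subtracts it for each j < i.
masks = {(i, j): rng.normal(size=4) for i in range(3) for j in range(i + 1, 3)}

def masked(i: int) -> np.ndarray:
    out = updates[i].copy()
    for (a, b), m in masks.items():
        if a == i:
            out += m
        elif b == i:
            out -= m
    return out

server_sum = sum(masked(i) for i in range(3))
# Every mask is added once and subtracted once, so the exact sum leaks.
assert np.allclose(server_sum, sum(updates))
```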
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- Efficient Vertical Federated Learning with Secure Aggregation [10.295508659999783]
We present a novel design for training vertical FL securely and efficiently using state-of-the-art security modules for secure aggregation.
We demonstrate empirically that our method does not impact training performance whilst obtaining a 9.1e2 to 3.8e4 times speedup compared to homomorphic encryption (HE).
arXiv Detail & Related papers (2023-05-18T18:08:36Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients)
We propose a trigger reverse engineering based defense and show that our method can achieve robustness improvement with guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks [68.20436971825941]
Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users.
Several studies have shown that FL is vulnerable to poisoning attacks.
To protect the privacy of local users, FL is usually trained in a differentially private way.
arXiv Detail & Related papers (2022-09-08T21:01:42Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
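As a toy numeric illustration of the clipping bias discussed here (all values hypothetical): when client updates are heterogeneous, clipping each update before averaging pulls the aggregate away from the true mean update.

```python
# Hypothetical demonstration of clipping bias under data heterogeneity.
# One client's large update is clipped, shifting the average.
import numpy as np

def clip(u: np.ndarray, bound: float) -> np.ndarray:
    norm = np.linalg.norm(u)
    return u * min(1.0, bound / (norm + 1e-12))

updates = [np.array([0.1]), np.array([0.2]), np.array([5.0])]  # one outlier client
true_mean = np.mean(updates, axis=0)                             # ~1.767
clipped_mean = np.mean([clip(u, 1.0) for u in updates], axis=0)  # ~0.433
print(true_mean, clipped_mean)  # the gap between the two is the clipping bias
```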
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Understanding the Interplay between Privacy and Robustness in Federated Learning [15.673448030003788]
Federated Learning (FL) is emerging as a promising paradigm of privacy-preserving machine learning.
Recent works highlighted several privacy and robustness weaknesses in FL.
It is still not clear how LDP affects adversarial robustness in FL.
arXiv Detail & Related papers (2021-06-13T16:01:35Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Federated Learning in Adversarial Settings [0.8701566919381224]
Federated learning schemes provide different trade-offs between robustness, privacy, bandwidth efficiency, and model accuracy.
We show that this extension performs as efficiently as the non-private but robust scheme, even with stringent privacy requirements.
This suggests a possible fundamental trade-off between Differential Privacy and robustness.
arXiv Detail & Related papers (2020-10-15T14:57:02Z)