PROFL: A Privacy-Preserving Federated Learning Method with Stringent
Defense Against Poisoning Attacks
- URL: http://arxiv.org/abs/2312.01045v1
- Date: Sat, 2 Dec 2023 06:34:37 GMT
- Title: PROFL: A Privacy-Preserving Federated Learning Method with Stringent
Defense Against Poisoning Attacks
- Authors: Yisheng Zhong, Li-Ping Wang
- Abstract summary: Federated Learning (FL) faces two major issues: privacy leakage and poisoning attacks.
We propose a novel privacy-preserving Byzantine-robust FL framework PROFL.
PROFL is based on the two-trapdoor additively homomorphic encryption algorithm and blinding techniques.
- Score: 2.6487166137163007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) faces two major issues: privacy leakage and poisoning
attacks, which may seriously undermine the reliability and security of the
system. Overcoming them simultaneously poses a great challenge. This is because
privacy protection policies prohibit access to users' local gradients to avoid
privacy leakage, while Byzantine-robust methods necessitate access to these
gradients to defend against poisoning attacks. To address these problems, we
propose a novel privacy-preserving Byzantine-robust FL framework PROFL. PROFL
is based on the two-trapdoor additively homomorphic encryption algorithm and
blinding techniques to ensure the data privacy of the entire FL process. During
the defense process, PROFL first utilizes a secure Multi-Krum algorithm to remove
malicious gradients at the user level. Then, based on the Pauta criterion (the
three-sigma rule), we propose a statistics-based privacy-preserving defense
algorithm that eliminates outlier interference at the feature level and resists
more covert impersonation poisoning attacks. Detailed theoretical analysis
proves the security and efficiency of the proposed method. We conducted
extensive experiments on two benchmark datasets, and PROFL improved accuracy by
39% to 75% across different attack settings compared to similar
privacy-preserving robust methods, demonstrating its significant advantage in
robustness.
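Illustrative sketch (not from the paper): the defense pipeline above can be approximated in plaintext as Multi-Krum selection over client gradients followed by Pauta-criterion (three-sigma) filtering per feature. The code below omits the two-trapdoor homomorphic encryption and blinding that keep gradients hidden in PROFL, and the function names, parameters, and the choice of replacing outliers with the coordinate-wise mean are assumptions for illustration only.

```python
import numpy as np

def multi_krum(grads, n_malicious, n_select):
    """Multi-Krum: score each gradient by the sum of squared distances to its
    n - n_malicious - 2 closest neighbours, then keep the n_select lowest-scoring ones."""
    n = len(grads)
    k = n - n_malicious - 2                                   # neighbours counted per score
    dists = np.array([[np.sum((g - h) ** 2) for h in grads] for g in grads])
    scores = np.array([np.sum(np.sort(row)[1:k + 1]) for row in dists])  # [0] is self-distance
    keep = np.argsort(scores)[:n_select]
    return [grads[i] for i in keep]

def pauta_filter(grads):
    """Pauta criterion (three-sigma rule) per feature: coordinates farther than
    3 standard deviations from the coordinate-wise mean are treated as outliers
    and replaced by that mean before averaging (one possible outlier handling)."""
    stacked = np.stack(grads)                                 # shape: (n_clients, n_params)
    mu, sigma = stacked.mean(axis=0), stacked.std(axis=0)
    mask = np.abs(stacked - mu) <= 3 * sigma                  # True where a value is kept
    return np.where(mask, stacked, mu).mean(axis=0)           # robust aggregate

# Toy round: 8 roughly honest gradients plus 2 crafted outliers.
rng = np.random.default_rng(0)
grads = [rng.normal(0.0, 1.0, 10) for _ in range(8)] + [np.full(10, 50.0)] * 2
selected = multi_krum(grads, n_malicious=2, n_select=6)
print(pauta_filter(selected))
```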
Related papers
- TernaryVote: Differentially Private, Communication Efficient, and
Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP), as well as the Byzantine resilience of the proposed algorithm; a minimal sketch of the compressor-plus-vote idea follows this entry.
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
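As a rough, unofficial illustration of the TernaryVote idea above, the sketch below pairs a ternary sign compressor with a coordinate-wise majority vote at the server. The threshold, step size, and function names are assumptions, and the noise needed for the differential-privacy guarantee is deliberately left out.

```python
import numpy as np

def ternary_compress(grad, threshold=1e-3):
    """Map each coordinate to {-1, 0, +1}: its sign if the magnitude exceeds
    the threshold, otherwise 0. This is the low-bit message a client uploads."""
    msg = np.sign(grad)
    msg[np.abs(grad) < threshold] = 0
    return msg.astype(np.int8)

def majority_vote(messages):
    """Coordinate-wise majority vote over the clients' ternary messages; the
    sign of each column sum is the agreed update direction."""
    return np.sign(np.stack(messages).sum(axis=0))

# Toy round: 7 clients send compressed noisy views of the same gradient,
# the server votes and applies the voted direction with a fixed step size.
rng = np.random.default_rng(1)
true_grad = rng.normal(size=5)
messages = [ternary_compress(true_grad + rng.normal(scale=0.5, size=5)) for _ in range(7)]
direction = majority_vote(messages)
model = np.zeros(5) - 0.1 * direction
print(direction, model)
```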
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated
Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves improvements with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- FedDef: Defense Against Gradient Leakage in Federated Learning-based
Network Intrusion Detection Systems [15.39058389031301]
We propose two privacy evaluation metrics designed for FL-based NIDSs.
We propose FedDef, a novel optimization-based input perturbation defense strategy with a theoretical guarantee.
We experimentally evaluate four existing defenses on four datasets and show that our defense outperforms all the baselines in terms of privacy protection.
arXiv Detail & Related papers (2022-10-08T15:23:30Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive
Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z)
- Byzantine-Robust and Privacy-Preserving Framework for FedML [10.124385820546014]
Federated learning has emerged as a popular paradigm for collaboratively training a model from data distributed among a set of clients.
This learning setting presents two unique challenges: how to protect privacy of the clients' data during training, and how to ensure integrity of the trained model.
We propose a two-pronged solution that aims to address both challenges under a single framework.
arXiv Detail & Related papers (2021-05-05T19:36:21Z)
- Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart
Privacy Attacks [31.34410250008759]
This paper measures the trade-off between model accuracy and privacy losses incurred by reconstruction, tracing and membership attacks.
Experiments show that model accuracies are improved on average by 5-20% compared with baseline mechanisms.
arXiv Detail & Related papers (2020-06-20T15:48:57Z)
- An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks
in Federated Learning [82.80836918594231]
Federated learning improves privacy of training data by exchanging local gradients or parameters rather than raw data.
However, an adversary can leverage local gradients and parameters to obtain local training data by launching reconstruction and membership inference attacks.
To defend against such privacy attacks, many noise perturbation methods have been designed; a generic sketch follows this entry.
arXiv Detail & Related papers (2020-02-23T06:50:20Z)
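For context on the noise-perturbation defenses mentioned in the last entry, here is a generic clip-and-add-Gaussian-noise step applied to a local gradient before sharing. This is the standard DP-style recipe rather than the accuracy-lossless method that paper proposes, and the clipping bound and noise scale are arbitrary illustrative values.

```python
import numpy as np

def perturb_gradient(grad, clip_norm=1.0, noise_std=0.1, rng=None):
    """Generic noise-perturbation defense: bound the gradient's L2 norm, then
    add Gaussian noise so the shared update reveals less about the raw data."""
    rng = rng or np.random.default_rng()
    scale = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))  # L2 clipping factor
    return grad * scale + rng.normal(scale=noise_std, size=grad.shape)

# A client would call this on its local gradient before uploading it.
local_grad = np.array([0.8, -2.5, 0.3, 1.1])
print(perturb_gradient(local_grad, rng=np.random.default_rng(2)))
```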
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.