FedDef: Defense Against Gradient Leakage in Federated Learning-based
Network Intrusion Detection Systems
- URL: http://arxiv.org/abs/2210.04052v3
- Date: Wed, 2 Aug 2023 07:36:09 GMT
- Authors: Jiahui Chen, Yi Zhao, Qi Li, Xuewei Feng, Ke Xu
- Abstract summary: We propose two privacy evaluation metrics designed for FL-based NIDSs.
We propose FedDef, a novel optimization-based input perturbation defense strategy with theoretical guarantee.
We experimentally evaluate four existing defenses on four datasets and show that our defense outperforms all the baselines in terms of privacy protection.
- Score: 15.39058389031301
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) methods have been widely applied to anomaly-based network
intrusion detection system (NIDS) to detect malicious traffic. To expand the
usage scenarios of DL-based methods, federated learning (FL) allows multiple
users to train a global model on the basis of respecting individual data
privacy. However, it has not yet been systematically evaluated how robust
FL-based NIDSs are against existing privacy attacks under existing defenses. To
address this issue, we propose two privacy evaluation metrics designed for
FL-based NIDSs, including (1) privacy score that evaluates the similarity
between the original and recovered traffic features using reconstruction
attacks, and (2) evasion rate against NIDSs using adversarial attack with the
recovered traffic. We conduct experiments to illustrate that existing defenses
provide little protection and the corresponding adversarial traffic can even
evade the SOTA NIDS Kitsune. To defend against such attacks and build a more
robust FL-based NIDS, we further propose FedDef, a novel optimization-based
input perturbation defense strategy with theoretical guarantee. It achieves
both high utility by minimizing the gradient distance and strong privacy
protection by maximizing the input distance. We experimentally evaluate four
existing defenses on four datasets and show that our defense outperforms all
the baselines in terms of privacy protection with up to 7 times higher privacy
score, while maintaining model accuracy loss within 3% under optimal parameter
combination.
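The core idea of FedDef, as described in the abstract, is to share gradients computed on a pseudo input rather than the true input: the pseudo input is optimized so that its gradient stays close to the true gradient (preserving utility) while the input itself is pushed far from the true traffic features (preserving privacy). The paper's actual formulation and solver are not reproduced here; the following is a minimal toy sketch of that two-term objective on a single linear-regression "client", using finite-difference gradient descent and a hypothetical trade-off weight `alpha` chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy client model: loss = (w @ x - y)^2, so grad_w = 2*(w @ x - y)*x
w = rng.normal(size=4)        # current global model weights
x_true = rng.normal(size=4)   # private traffic features (to protect)
y_true = 1.0

def grad_w(x, y):
    return 2.0 * (w @ x - y) * x

g_true = grad_w(x_true, y_true)

def objective(x):
    # FedDef-style trade-off (sketch): minimize gradient distance (utility)
    # while maximizing input distance (privacy). alpha is a hypothetical weight.
    alpha = 0.1
    utility_term = np.sum((grad_w(x, y_true) - g_true) ** 2)
    privacy_term = np.sum((x - x_true) ** 2)
    return utility_term - alpha * privacy_term

# Optimize the pseudo input by finite-difference gradient descent,
# starting from a slightly perturbed copy of the true input.
x_pseudo = x_true + 0.01 * rng.normal(size=4)
eps, lr = 1e-5, 0.01
for _ in range(300):
    g = np.zeros_like(x_pseudo)
    for i in range(len(x_pseudo)):
        e = np.zeros_like(x_pseudo)
        e[i] = eps
        g[i] = (objective(x_pseudo + e) - objective(x_pseudo - e)) / (2 * eps)
    x_pseudo -= lr * g

# The client would share grad_w(x_pseudo, y_true) instead of g_true.
input_dist = np.linalg.norm(x_pseudo - x_true)
grad_dist = np.linalg.norm(grad_w(x_pseudo, y_true) - g_true)
print(input_dist, grad_dist)
```

In the paper's setting the same trade-off is formulated over neural-network gradients with a theoretical guarantee; this sketch only illustrates why the two terms pull in opposite directions and how a pseudo input can decouple what the server sees from what the client holds.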
Related papers
- Celtibero: Robust Layered Aggregation for Federated Learning [0.0]
We introduce Celtibero, a novel defense mechanism that integrates layered aggregation to enhance robustness against adversarial manipulation.
We demonstrate that Celtibero consistently achieves high main task accuracy (MTA) while maintaining minimal attack success rates (ASR) across a range of untargeted and targeted poisoning attacks.
arXiv Detail & Related papers (2024-08-26T12:54:00Z)
- PROFL: A Privacy-Preserving Federated Learning Method with Stringent Defense Against Poisoning Attacks [2.6487166137163007]
Federated Learning (FL) faces two major issues: privacy leakage and poisoning attacks.
We propose a novel privacy-preserving Byzantine-robust FL framework PROFL.
PROFL is based on a two-trapdoor additive homomorphic encryption algorithm and blinding techniques.
arXiv Detail & Related papers (2023-12-02T06:34:37Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for the protection mechanisms that protects privacy via distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- RecUP-FL: Reconciling Utility and Privacy in Federated Learning via User-configurable Privacy Defense [9.806681555309519]
Federated learning (FL) allows clients to collaboratively train a model without sharing their private data.
Recent studies have shown that private information can still be leaked through shared gradients.
We propose a user-configurable privacy defense, RecUP-FL, that can better focus on the user-specified sensitive attributes.
arXiv Detail & Related papers (2023-04-11T10:59:45Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective [47.23145404191034]
Federated learning (FL) is a popular distributed learning framework that can reduce privacy risks by not explicitly sharing private data.
Recent works demonstrated that sharing model updates makes FL vulnerable to inference attacks.
We show our key observation that the data representation leakage from gradients is the essential cause of privacy leakage in FL.
arXiv Detail & Related papers (2020-12-08T20:42:12Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks [31.34410250008759]
This paper measures the trade-off between model accuracy and privacy losses incurred by reconstruction, tracing and membership attacks.
Experiments show that model accuracies are improved on average by 5-20% compared with baseline mechanisms.
arXiv Detail & Related papers (2020-06-20T15:48:57Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.