Fed-LSAE: Thwarting Poisoning Attacks against Federated Cyber Threat Detection System via Autoencoder-based Latent Space Inspection
- URL: http://arxiv.org/abs/2309.11053v1
- Date: Wed, 20 Sep 2023 04:14:48 GMT
- Title: Fed-LSAE: Thwarting Poisoning Attacks against Federated Cyber Threat Detection System via Autoencoder-based Latent Space Inspection
- Authors: Tran Duc Luong, Vuong Minh Tien, Nguyen Huu Quyen, Do Thi Thu Hien, Phan The Duy, Van-Hau Pham
- Abstract summary: In cybersecurity, the sensitive data, contextual information, and high-quality labels held by each organization play an essential role in building high-performance threat-detection models.
In this paper, we investigate a novel robust aggregation method for federated learning, namely Fed-LSAE, which takes advantage of latent space representation.
The experimental results on the CIC-ToN-IoT and N-BaIoT datasets confirm the feasibility of our defensive mechanism against cutting-edge poisoning attacks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The significant rise of security concerns in conventional centralized learning has promoted federated learning (FL) adoption in building intelligent applications without privacy breaches. In cybersecurity, the sensitive data, contextual information, and high-quality labels held by each enterprise organization play an essential role in constructing high-performance machine learning (ML) models for detecting cyber threats. Nonetheless, the risk of poisoning attacks mounted by internal adversaries against FL systems has prompted discussions about designing robust anti-poisoning frameworks. Whereas past defensive mechanisms were based on outlier detection, recent approaches tend to focus on latent space representation. In this paper, we investigate a novel robust aggregation method for FL, namely Fed-LSAE, which leverages the latent space representation taken from the penultimate layer, together with an Autoencoder, to exclude malicious clients from the training process. The experimental results on the CIC-ToN-IoT and N-BaIoT datasets confirm the feasibility of our defensive mechanism against cutting-edge poisoning attacks and support the development of a robust FL-based threat detector in the context of IoT. More specifically, FL evaluation metrics reach approximately 98% across the board when our Fed-LSAE defense is integrated.
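The abstract describes the defense only at a high level. As a rough, hypothetical sketch of the general pattern (not the authors' implementation): the server collects a latent vector from each client's penultimate layer, fits a small autoencoder over those vectors, and drops clients whose reconstruction error is anomalously high before averaging. The function names, the scikit-learn autoencoder stand-in, and the two-sigma cutoff below are all illustrative assumptions.

```python
# Hypothetical sketch of autoencoder-based latent-space inspection for
# robust FL aggregation, in the spirit of the Fed-LSAE description above.
# The 2-sigma cutoff and all helper names are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def inspect_and_aggregate(latent_vectors, client_updates, n_sigma=2.0):
    """latent_vectors: one flattened penultimate-layer representation per
    client; client_updates: the corresponding flattened model updates."""
    X = np.stack(latent_vectors)

    # A small autoencoder: an MLP regressor trained to reproduce its input.
    ae = MLPRegressor(hidden_layer_sizes=(max(2, X.shape[1] // 4),),
                      max_iter=2000, random_state=0)
    ae.fit(X, X)

    # Poisoned clients should reconstruct poorly relative to the benign
    # majority that dominates the autoencoder's training signal.
    errors = np.mean((ae.predict(X) - X) ** 2, axis=1)
    cutoff = errors.mean() + n_sigma * errors.std()
    benign = [u for u, e in zip(client_updates, errors) if e <= cutoff]

    # Plain FedAvg over the clients that pass inspection.
    return np.mean(np.stack(benign), axis=0)
```

The reconstruction-error rule is only one plausible way to turn an autoencoder into a client filter; the paper may combine the latent representations with a different similarity or clustering test.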
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground truth data can be recovered through a gradient technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
- Celtibero: Robust Layered Aggregation for Federated Learning [0.0]
We introduce Celtibero, a novel defense mechanism that integrates layered aggregation to enhance robustness against adversarial manipulation.
We demonstrate that Celtibero consistently achieves high main task accuracy (MTA) while maintaining minimal attack success rates (ASR) across a range of untargeted and targeted poisoning attacks.
arXiv Detail & Related papers (2024-08-26T12:54:00Z)
- FedMADE: Robust Federated Learning for Intrusion Detection in IoT Networks Using a Dynamic Aggregation Method [7.842334649864372]
The proliferation of Internet of Things (IoT) devices across multiple sectors has raised serious network security concerns.
Traditional Machine Learning (ML)-based Intrusion Detection Systems (IDSs) for cyber-attack classification require data transmission from IoT devices to a centralized server for traffic analysis, raising severe privacy concerns.
We introduce FedMADE, a novel dynamic aggregation method, which clusters devices by their traffic patterns and aggregates local models based on their contributions towards overall performance.
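The one-line summary leaves the aggregation rule unspecified; purely as a hedged illustration of contribution-weighted averaging in general (not FedMADE's published algorithm, and with a hypothetical scoring rule):

```python
# Illustrative contribution-weighted model averaging. Scoring each client
# by held-out validation accuracy is an assumption, not FedMADE's method.
import numpy as np

def weighted_aggregate(client_models, contribution_scores):
    """client_models: list of flattened parameter vectors;
    contribution_scores: non-negative per-client scores, e.g. each local
    model's accuracy on a held-out server-side validation set."""
    w = np.asarray(contribution_scores, dtype=float)
    w = w / w.sum()                        # convex combination of clients
    return np.tensordot(w, np.stack(client_models), axes=1)
```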
arXiv Detail & Related papers (2024-08-13T18:42:34Z)
- EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Poisoning Prevention in Federated Learning and Differential Privacy via Stateful Proofs of Execution [8.92716309877259]
Federated Learning (FL) and Local Differential Privacy (LDP) have attracted much attention over the past few years.
They share the common limitation of being vulnerable to poisoning attacks.
We propose a system-level approach to remedy this issue based on a novel security notion of Proofs of Stateful Execution.
arXiv Detail & Related papers (2024-04-10T04:18:26Z)
- Enabling Privacy-Preserving Cyber Threat Detection with Federated Learning [4.475514208635884]
This study systematically profiles the (in)feasibility of federated learning for privacy-preserving cyber threat detection in terms of effectiveness, Byzantine resilience, and efficiency.
It shows that FL-trained detection models can achieve a performance that is comparable to centrally trained counterparts.
Under a realistic threat model, FL proves resistant to both data poisoning and model poisoning attacks.
arXiv Detail & Related papers (2024-04-08T01:16:56Z)
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art in resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model for Intrusion Detection Systems (IDS).
The proposed FLEKD (federated learning via ensemble knowledge distillation) enables a more flexible aggregation method than conventional model fusion techniques.
Experimental results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
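No details of FLEKD are given here, so the following is a hedged sketch of server-side ensemble knowledge distillation in general (the FedDF-style pattern, not necessarily FLEKD's exact procedure), assuming PyTorch classifiers and an unlabeled proxy loader at the server:

```python
# Hedged illustration of ensemble-distillation aggregation: the server
# distills the averaged client logits on unlabeled proxy data into the
# global model. All hyperparameters and names are assumptions.
import torch
import torch.nn.functional as F

def distill_ensemble(global_model, client_models, proxy_loader,
                     epochs=1, lr=1e-3, temperature=2.0):
    opt = torch.optim.Adam(global_model.parameters(), lr=lr)
    for _ in range(epochs):
        for x in proxy_loader:                    # unlabeled batches
            with torch.no_grad():                 # ensemble soft labels
                teacher = torch.stack([m(x) for m in client_models]).mean(0)
            loss = F.kl_div(
                F.log_softmax(global_model(x) / temperature, dim=1),
                F.softmax(teacher / temperature, dim=1),
                reduction="batchmean") * temperature ** 2
            opt.zero_grad()
            loss.backward()
            opt.step()
    return global_model
```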
arXiv Detail & Related papers (2024-01-22T14:16:37Z)
- HBFL: A Hierarchical Blockchain-based Federated Learning Framework for a Collaborative IoT Intrusion Detection [0.0]
We propose a hierarchical blockchain-based federated learning framework to enable secure and privacy-preserved collaborative IoT intrusion detection.
The proposed ML-based intrusion detection framework follows a hierarchical federated learning architecture to ensure the privacy of the learning process and organisational data.
The outcome is a securely designed ML-based intrusion detection system capable of detecting a wide range of malicious activities while preserving data privacy.
arXiv Detail & Related papers (2022-04-08T19:06:16Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.