Multi-stage Attack Detection and Prediction Using Graph Neural Networks: An IoT Feasibility Study
- URL: http://arxiv.org/abs/2404.18328v1
- Date: Sun, 28 Apr 2024 22:11:24 GMT
- Title: Multi-stage Attack Detection and Prediction Using Graph Neural Networks: An IoT Feasibility Study
- Authors: Hamdi Friji, Ioannis Mavromatis, Adrian Sanchez-Mompo, Pietro Carnelli, Alexis Olivereau, Aftab Khan
- Abstract summary: This paper proposes a novel 3-stage intrusion detection system inspired by a simplified version of the Lockheed Martin cyber kill chain.
The proposed approach consists of three models, each responsible for detecting a group of attacks with common characteristics.
Using the ToN IoT dataset, we achieve an average F1-score of 94% across the different stages, outperforming the benchmark approaches.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the ever-increasing reliance on digital networks in modern life, ensuring their security has become a critical challenge. Intrusion Detection Systems play a crucial role in network security, actively identifying and mitigating malicious behaviour. However, the relentless advancement of cyber-threats has rendered traditional approaches insufficient to address the sophistication and complexity of modern attacks. This paper proposes a novel 3-stage intrusion detection system, inspired by a simplified version of the Lockheed Martin cyber kill chain, to detect advanced multi-step attacks. The proposed approach consists of three models, each responsible for detecting a group of attacks with common characteristics. The detection outcomes of the first two stages are used to conduct a feasibility study on predicting attacks in the third stage. Using the ToN IoT dataset, we achieve an average F1-score of 94% across the different stages, outperforming benchmark approaches based on a Random Forest model. Finally, we discuss the feasibility of integrating this approach into a real-world system and propose several directions for future work.
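To make the staged design concrete, here is a minimal sketch of how three per-stage detectors could be chained, with the detection scores of the first two stages appended as extra features for the third-stage predictor. The Random Forest models, binary labels, and feature handling are illustrative assumptions chosen to keep the example self-contained; they are not the authors' GNN implementation.

```python
# Hypothetical sketch of a 3-stage detection pipeline in the spirit of the
# kill-chain-inspired design: stages 1 and 2 detect two attack groups, and
# stage 3 predicts later-stage attacks from their outcomes. Models and
# feature handling are illustrative assumptions, not the authors' GNNs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class ThreeStageIDS:
    def __init__(self):
        self.stage1 = RandomForestClassifier(n_estimators=100)  # attack group 1
        self.stage2 = RandomForestClassifier(n_estimators=100)  # attack group 2
        self.stage3 = RandomForestClassifier(n_estimators=100)  # later-stage prediction

    def fit(self, X, y1, y2, y3):
        self.stage1.fit(X, y1)
        self.stage2.fit(X, y2)
        # Stage 3 sees the raw features plus the first two detection outcomes.
        self.stage3.fit(self._augment(X), y3)
        return self

    def _augment(self, X):
        p1 = self.stage1.predict_proba(X)[:, 1:]  # stage-1 detection scores
        p2 = self.stage2.predict_proba(X)[:, 1:]  # stage-2 detection scores
        return np.hstack([X, p1, p2])

    def predict(self, X):
        return self.stage3.predict(self._augment(X))
```

Under such a layout, an average F1-score across stages would simply be the mean of the three per-stage scores.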
Related papers
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to model stealing attacks, in which an adversary duplicates the target model using only query access.
We introduce three model stealing attacks adapted to different real-world scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- Detecting Unknown Attacks in IoT Environments: An Open Set Classifier for Enhanced Network Intrusion Detection [5.787704156827843]
We introduce a framework that mitigates the open set recognition (OSR) problem for Network Intrusion Detection Systems (NIDS) tailored to IoT environments.
Our framework capitalizes on image-based representations of packet-level data, extracting spatial and temporal patterns from network traffic.
Empirical results underscore the framework's efficacy, with an 88% detection rate for previously unseen attacks.
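As a rough illustration of the image-based representation, the sketch below maps raw packet bytes to fixed-size grayscale arrays that a CNN could consume; the 32x32 shape, zero-padding, and byte-to-pixel mapping are assumptions, not the paper's exact preprocessing.

```python
# Hypothetical sketch: turning packet-level payload bytes into fixed-size
# grayscale "images" for a CNN-based intrusion detector. The 32x32 shape
# and zero-padding are illustrative assumptions.
import numpy as np

def packets_to_image(packets: list[bytes], side: int = 32) -> np.ndarray:
    """Concatenate packet bytes, pad/truncate, reshape to side x side."""
    flat = b"".join(packets)[: side * side]
    buf = np.frombuffer(flat, dtype=np.uint8)
    padded = np.zeros(side * side, dtype=np.uint8)
    padded[: buf.size] = buf
    return padded.reshape(side, side) / 255.0  # normalize pixels to [0, 1]

# Example: three toy packets from one flow become a single 32x32 input.
image = packets_to_image([b"\x45\x00\x00\x54", b"\x08\x00\xf7\x0c", b"\xde\xad\xbe\xef"])
print(image.shape)  # (32, 32)
```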
arXiv Detail & Related papers (2023-09-14T06:41:45Z)
- Efficient Network Representation for GNN-based Intrusion Detection [2.321323878201932]
Recent decades have seen a growth in cyber-attacks causing severe economic and privacy damage.
We propose a novel network representation as a graph of flows that aims to provide relevant topological information for the intrusion detection task.
We present a Graph Neural Network (GNN) based framework responsible for exploiting the proposed graph structure.
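One plausible construction of such a graph of flows, sketched under the assumption that flows become nodes and that flows sharing an endpoint are linked; the paper's exact topology rules may differ.

```python
# Hypothetical sketch: representing network flows as a graph, with one node
# per flow and edges between flows that share an endpoint (IP address).
# The connection rule is an assumption, not the paper's exact construction.
import networkx as nx

flows = [
    {"id": 0, "src": "10.0.0.1", "dst": "10.0.0.2", "bytes": 1200},
    {"id": 1, "src": "10.0.0.2", "dst": "10.0.0.3", "bytes": 300},
    {"id": 2, "src": "10.0.0.4", "dst": "10.0.0.5", "bytes": 80},
]

g = nx.Graph()
for f in flows:
    g.add_node(f["id"], **f)  # flow features become node attributes
for a in flows:
    for b in flows:
        if a["id"] < b["id"] and {a["src"], a["dst"]} & {b["src"], b["dst"]}:
            g.add_edge(a["id"], b["id"])  # flows sharing an endpoint are linked

print(list(g.edges))  # [(0, 1)]: flows 0 and 1 share host 10.0.0.2
```

A GNN then operates over this structure, propagating information between topologically related flows rather than treating each flow in isolation.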
arXiv Detail & Related papers (2023-09-11T16:10:12Z)
- Few-shot Weakly-supervised Cybersecurity Anomaly Detection [1.179179628317559]
We propose an enhancement to an existing few-shot weakly-supervised deep learning anomaly detection framework.
This framework incorporates data augmentation, representation learning and ordinal regression.
We evaluate the implemented framework on three benchmark datasets.
arXiv Detail & Related papers (2023-04-15T04:37:54Z)
- A Hybrid Deep Learning Anomaly Detection Framework for Intrusion Detection [4.718295605140562]
We propose a three-stage, deep-learning-based anomaly detection framework for network intrusion detection.
The framework comprises an integration of unsupervised (K-means clustering), semi-supervised (GANomaly) and supervised learning (CNN) algorithms.
We evaluate the implemented framework on three benchmark datasets, as sketched below.
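A hedged sketch of how the three stages could be wired together; IsolationForest stands in for GANomaly and logistic regression for the CNN, purely to keep the example self-contained and runnable.

```python
# Hypothetical sketch of a three-stage hybrid detector: unsupervised
# clustering -> semi-supervised anomaly scoring -> supervised classification.
# IsolationForest substitutes for GANomaly and LogisticRegression for the
# CNN; both substitutions are assumptions, not the paper's components.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))    # toy flow features
y = rng.integers(0, 2, size=500)  # toy attack labels

# Stage 1: cluster traffic into coarse behavioural groups.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
# Stage 2: score anomalousness (GANomaly plays this role in the paper).
anomaly = IsolationForest(random_state=0).fit(X).score_samples(X)
# Stage 3: supervised classifier over raw features + earlier stage outputs.
X3 = np.column_stack([X, clusters, anomaly])
clf = LogisticRegression(max_iter=1000).fit(X3, y)
print(clf.score(X3, y))
```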
arXiv Detail & Related papers (2022-12-02T04:40:54Z)
- NetSentry: A Deep Learning Approach to Detecting Incipient Large-scale Network Attacks [9.194664029847019]
We show how to use Machine Learning for Network Intrusion Detection (NID) in a principled way.
We propose NetSentry, perhaps the first NIDS of its kind, which builds on Bi-ALSTM, an original ensemble of sequential neural models.
We demonstrate F1-score gains above 33% over the state-of-the-art, as well as detection rates up to 3 times higher for attacks such as XSS and web brute-force.
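For orientation only, below is a generic bidirectional LSTM sequence classifier; Bi-ALSTM itself is an ensemble of sequential models beyond this single network, so every detail here is an illustrative stand-in.

```python
# Hypothetical sketch: a generic bidirectional LSTM classifier over flow
# feature sequences. A simplified stand-in, not the Bi-ALSTM architecture.
import torch
import torch.nn as nn

class BiLSTMDetector(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # concat of both directions

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)          # (batch, time, 2 * hidden)
        return self.head(out[:, -1])   # classify from the last time step

logits = BiLSTMDetector(n_features=20)(torch.randn(8, 50, 20))
print(logits.shape)  # torch.Size([8, 2])
```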
arXiv Detail & Related papers (2022-02-20T17:41:02Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture, based on coverage paradigms, that hardens the model against different classes of unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
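As a rough rendering of the dropout-diversification idea, the sketch below keeps spatial dropout active in a surrogate's convolutional stack while computing attack gradients, so each iteration effectively queries a different sub-network; this is a generic illustration, not the DFANet implementation.

```python
# Hypothetical sketch of dropout-diversified surrogate gradients: keeping
# Dropout2d active while computing adversarial gradients samples a different
# sub-network each step. Generic illustration, not DFANet itself.
import torch
import torch.nn as nn

surrogate = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.1),            # dropout inside the conv stack
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
surrogate.train()  # keep dropout stochastic during gradient computation

x = torch.rand(1, 3, 32, 32, requires_grad=True)
target = torch.tensor([3])
for _ in range(10):  # iterative attack steps, one dropout mask per step
    loss = nn.functional.cross_entropy(surrogate(x), target)
    grad = torch.autograd.grad(loss, x)[0]
    x = (x + 0.01 * grad.sign()).clamp(0, 1).detach().requires_grad_(True)
```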
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.