3D-IDS: Doubly Disentangled Dynamic Intrusion Detection
- URL: http://arxiv.org/abs/2307.11079v1
- Date: Sun, 2 Jul 2023 00:26:26 GMT
- Title: 3D-IDS: Doubly Disentangled Dynamic Intrusion Detection
- Authors: Chenyang Qiu, Yingsheng Geng, Junrui Lu, Kaida Chen, Shitong Zhu, Ya Su, Guoshun Nan, Can Zhang, Junsong Fu, Qimei Cui, Xiaofeng Tao
- Abstract summary: Network-based intrusion detection system (NIDS) monitors network traffic for malicious activities.
Existing methods perform inconsistently in declaring various unknown attacks or detecting diverse known attacks.
We propose 3D-IDS, a novel method that aims to tackle the above issues through two-step feature disentanglements and a dynamic graph diffusion scheme.
- Score: 24.293504468229678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Network-based intrusion detection system (NIDS) monitors network traffic for
malicious activities, forming the frontline defense against increasing attacks
over information infrastructures. Although promising, our quantitative analysis
shows that existing methods perform inconsistently in declaring various unknown
attacks (e.g., 9% and 35% F1 respectively for two distinct unknown threats for
an SVM-based method) or detecting diverse known attacks (e.g., 31% F1 for the
Backdoor and 93% F1 for DDoS by a GCN-based state-of-the-art method), and
reveals that the underlying cause is entangled distributions of flow features.
This motivates us to propose 3D-IDS, a novel method that aims to tackle the
above issues through two-step feature disentanglements and a dynamic graph
diffusion scheme. Specifically, we first disentangle traffic features by a
non-parameterized optimization based on mutual information, automatically
differentiating tens and hundreds of complex features of various attacks. Such
differentiated features will be fed into a memory model to generate
representations, which are further disentangled to highlight the
attack-specific features. Finally, we use a novel graph diffusion method that
dynamically fuses the network topology for spatial-temporal aggregation in
evolving data streams. By doing so, we can effectively identify various attacks
in encrypted traffic, including unknown threats and known ones that are not
easily detected. Experiments show the superiority of our 3D-IDS. We also
demonstrate that our two-step feature disentanglements benefit the
explainability of NIDS.
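The first disentanglement step, as the abstract describes it, differentiates traffic features through a non-parameterized optimization based on mutual information. A minimal sketch of that general idea (not the paper's actual procedure): estimate pairwise mutual information between flow-feature columns with a histogram estimator, then greedily keep only features with low redundancy against those already kept. The function names, the binning, and the greedy threshold rule are all illustrative assumptions.

```python
import math
from collections import Counter

def mutual_information(xs, ys, bins=4):
    """Histogram estimate of mutual information (in nats) between two feature columns."""
    def discretize(v):
        lo, hi = min(v), max(v)
        if hi == lo:
            return [0] * len(v)
        # Map each value into one of `bins` equal-width buckets.
        return [min(int((x - lo) / (hi - lo) * bins), bins - 1) for x in v]
    dx, dy = discretize(xs), discretize(ys)
    n = len(dx)
    pxy = Counter(zip(dx, dy))          # joint histogram
    px, py = Counter(dx), Counter(dy)   # marginal histograms
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def select_disentangled(features, max_redundancy=0.1):
    """Greedily keep feature columns whose MI with every kept column stays low.

    A crude, non-parameterized stand-in for MI-guided feature differentiation;
    the threshold rule is a hypothetical simplification.
    """
    kept = []
    for i, col in enumerate(features):
        if all(mutual_information(col, features[j]) <= max_redundancy for j in kept):
            kept.append(i)
    return kept
```

On toy data, a duplicated column carries maximal MI with its original and is dropped, while a weakly related column survives under a looser threshold.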
Related papers
- Multi-stage Attack Detection and Prediction Using Graph Neural Networks: An IoT Feasibility Study [2.5325901958283126]
This paper proposes a novel 3-stage intrusion detection system inspired by a simplified version of the Lockheed Martin cyber kill chain.
The proposed approach consists of three models, each responsible for detecting a group of attacks with common characteristics.
Using the ToN IoT dataset, we achieved an average of 94% F1-Score among different stages, outperforming the benchmark approaches.
arXiv Detail & Related papers (2024-04-28T22:11:24Z)
- An incremental hybrid adaptive network-based IDS in Software Defined Networks to detect stealth attacks [0.0]
Advanced Persistent Threats (APTs) are a type of attack that implement a wide range of strategies to evade detection.
Machine Learning (ML) techniques are widely used in Intrusion Detection Systems (IDSs) to detect such attacks, but they struggle when the data distribution changes.
An incremental hybrid adaptive Network Intrusion Detection System (NIDS) is proposed to tackle the issue of concept drift in SDN.
arXiv Detail & Related papers (2024-04-01T13:33:40Z)
- Unified Physical-Digital Face Attack Detection [66.14645299430157]
Face Recognition (FR) systems can suffer from physical (e.g., print photo) and digital (e.g., DeepFake) attacks.
Previous related work rarely considers both situations at the same time.
We propose a Unified Attack Detection framework based on Vision-Language Models (VLMs).
arXiv Detail & Related papers (2024-01-31T09:38:44Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Efficient Network Representation for GNN-based Intrusion Detection [2.321323878201932]
Recent decades have seen growth in the number of cyber-attacks, with severe economic and privacy damage.
We propose a novel network representation as a graph of flows that aims to provide relevant topological information for the intrusion detection task.
We present a Graph Neural Network (GNN) based framework responsible for exploiting the proposed graph structure.
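The paper above proposes representing traffic as a graph of flows before applying a GNN. One plausible (assumed, not taken from the paper) construction: treat each flow record as a node and link two flows whenever they share an endpoint host. The tuple layout `(src_ip, dst_ip)` is an illustrative schema, not the paper's.

```python
def build_flow_graph(flows):
    """Build an adjacency map over flow indices.

    flows: list of (src_ip, dst_ip) tuples, one per flow record.
    Two flows are connected if they share at least one host -- a
    hypothetical linking rule used here for illustration.
    """
    by_host = {}
    for i, (src, dst) in enumerate(flows):
        for host in (src, dst):
            by_host.setdefault(host, []).append(i)
    adj = {i: set() for i in range(len(flows))}
    # Every pair of flows touching the same host becomes an edge.
    for members in by_host.values():
        for a in members:
            for b in members:
                if a != b:
                    adj[a].add(b)
    return adj
```

A GNN layer would then aggregate each flow's features over its neighbors in this graph, giving the model the topological context that per-flow classifiers lack.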
arXiv Detail & Related papers (2023-09-11T16:10:12Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- NetSentry: A Deep Learning Approach to Detecting Incipient Large-scale Network Attacks [9.194664029847019]
We show how to use Machine Learning for Network Intrusion Detection (NID) in a principled way.
We propose NetSentry, perhaps the first NIDS of its kind, which builds on Bi-ALSTM, an original ensemble of sequential neural models.
We demonstrate F1 score gains above 33% over the state-of-the-art, as well as up to 3 times higher rates of detecting attacks such as XSS and web bruteforce.
arXiv Detail & Related papers (2022-02-20T17:41:02Z)
- PointBA: Towards Backdoor Attacks in 3D Point Cloud [31.210502946247498]
We present the backdoor attacks in 3D with a unified framework that exploits the unique properties of 3D data and networks.
Our proposed backdoor attack on 3D point clouds is expected to serve as a baseline for improving the robustness of 3D deep models.
arXiv Detail & Related papers (2021-03-30T04:49:25Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.