Disentangled Dynamic Intrusion Detection
        - URL: http://arxiv.org/abs/2307.11079v2
 - Date: Sat, 14 Dec 2024 09:12:39 GMT
 - Title: Disentangled Dynamic Intrusion Detection
 - Authors: Chenyang Qiu, Guoshun Nan, Hongrui Xia, Zheng Weng, Xueting Wang, Meng Shen, Xiaofeng Tao, Jun Liu
 - Abstract summary: We propose DIDS-MFL, a disentangled intrusion detection method to handle various intrusion detection scenarios. DIDS-MFL involves two key components: a double Disentanglement-based Intrusion Detection System (DIDS) and a plug-and-play Multi-scale Few-shot Learning-based (MFL) intrusion detection module.
 - Score: 17.155894470599762
 - License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Abstract:   Network-based intrusion detection systems (NIDS) monitor network traffic for malicious activities, forming the frontline defense against the growing number of attacks on information infrastructures. Although promising, our quantitative analysis shows that existing methods perform inconsistently across different attack types and perform poorly in few-shot intrusion detection. We reveal that the underlying cause is the entangled distributions of flow features. This motivates us to propose DIDS-MFL, a disentangled intrusion detection method that handles various intrusion detection scenarios. DIDS-MFL involves two key components: a double Disentanglement-based Intrusion Detection System (DIDS) and a plug-and-play Multi-scale Few-shot Learning-based (MFL) intrusion detection module. Specifically, the proposed DIDS first disentangles traffic features by a non-parameterized optimization, automatically differentiating the tens to hundreds of complex features of various attacks. Such differentiated features are further disentangled to highlight the attack-specific ones. Our DIDS additionally uses a novel graph diffusion method that dynamically fuses the network topology in evolving data streams. Furthermore, the proposed MFL involves an alternating optimization framework, with rigorous derivation, to address the entangled representations in few-shot traffic threats. MFL first captures multi-scale information in the latent space to distinguish attack-specific information, then optimizes a disentanglement term to highlight it. Finally, MFL fuses these objectives and solves them alternately in an end-to-end manner. Experiments show the superiority of our proposed DIDS-MFL. Our code is available at https://github.com/qcydm/DIDS-MFL
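To make the graph diffusion step described in the abstract more concrete, below is a minimal sketch of how flow features might be fused over the network topology. The personalized-PageRank-style kernel, function name, and all parameters are illustrative assumptions, not taken from the DIDS-MFL codebase.

```python
# Hedged sketch: propagate per-flow features over a flow-graph snapshot via a
# truncated personalized-PageRank-style diffusion. Names and constants are illustrative.
import numpy as np

def diffuse_flow_features(adj: np.ndarray, feats: np.ndarray,
                          alpha: float = 0.15, steps: int = 10) -> np.ndarray:
    """adj: (n, n) adjacency of the current graph snapshot; feats: (n, d) node features."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    trans = adj / deg                      # row-normalized transition matrix
    out = feats.copy()
    for _ in range(steps):                 # truncated diffusion with restart probability alpha
        out = alpha * feats + (1 - alpha) * trans @ out
    return out

# toy usage: 4 nodes with 3-dimensional flow features
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.random.rand(4, 3)
fused = diffuse_flow_features(A, X)
```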
 
       
      
        Related papers
        - Topology-aware Detection and Localization of Distributed   Denial-of-Service Attacks in Network-on-Chips [2.6490401904186758]
This paper presents a framework to conduct topology-aware detection and localization of DDoS attacks using Graph Neural Networks (GNNs). By modeling the NoC as a graph, our method utilizes traffic features to effectively identify and localize DDoS attacks. Experimental results demonstrate that our approach can detect and localize DDoS attacks with high accuracy (up to 99%) while maintaining consistent performance under diverse attack strategies.
arXiv  Detail & Related papers  (2025-05-20T20:49:34Z) - Dynamic Attention Analysis for Backdoor Detection in Text-to-Image   Diffusion Models [70.03122709795122]
Previous backdoor detection methods primarily focus on the static features of backdoor samples. This study introduces a novel backdoor detection perspective named Dynamic Attention Analysis (DAA), showing that dynamic attention characteristics serve as better indicators for backdoor detection. Our approach significantly surpasses existing detection methods, achieving an average F1 score of 79.49% and an AUC of 87.67%.
arXiv  Detail & Related papers  (2025-04-29T07:59:35Z) - DataSentinel: A Game-Theoretic Detection of Prompt Injection Attacks [101.52204404377039]
LLM-integrated applications and agents are vulnerable to prompt injection attacks.
A detection method aims to determine whether a given input is contaminated by an injected prompt.
We propose DataSentinel, a game-theoretic method to detect prompt injection attacks.
arXiv  Detail & Related papers  (2025-04-15T16:26:21Z) - Enhancing Network Security: A Hybrid Approach for Detection and   Mitigation of Distributed Denial-of-Service Attacks Using Machine Learning [0.0]
The distributed denial-of-service (DDoS) attack represents an advanced form of the denial-of-service (DoS) attack. We propose a Hybrid Model to strengthen network security by combining the feature-extraction abilities of 1D Convolutional Neural Networks (CNNs) with the classification skills of Random Forest (RF) and Multi-layer Perceptron (MLP). We also integrate our machine learning model with Snort, which provides a robust and adaptive solution for detecting and mitigating various DDoS attacks.
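As a rough illustration of such a hybrid pipeline (not the authors' exact configuration), the sketch below extracts features from raw flow vectors with a small 1D CNN and feeds them to a Random Forest; the layer sizes, feature dimension, and variable names are assumptions.

```python
# Hedged sketch: 1D-CNN feature extractor followed by a Random Forest classifier.
# All hyperparameters and shapes are illustrative.
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class CNNFeatureExtractor(nn.Module):
    def __init__(self, n_features: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, n_features, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # global average pooling over the flow
        )

    def forward(self, x):                       # x: (batch, 1, flow_length)
        return self.conv(x).squeeze(-1)         # -> (batch, n_features)

extractor = CNNFeatureExtractor()
flows = torch.randn(128, 1, 80)                 # toy batch of raw flow vectors
labels = torch.randint(0, 2, (128,))            # 0 = benign, 1 = DDoS (toy labels)

with torch.no_grad():
    feats = extractor(flows).numpy()

clf = RandomForestClassifier(n_estimators=100).fit(feats, labels.numpy())
preds = clf.predict(feats)
```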
arXiv  Detail & Related papers  (2025-03-07T14:47:56Z) - CND-IDS: Continual Novelty Detection for Intrusion Detection Systems [7.196884299359838]
Intrusion detection systems (IDS) play a crucial role in IoT and network security by monitoring system data and alerting to suspicious activities.
Machine learning (ML) has emerged as a promising solution for IDS, offering highly accurate intrusion detection.
We propose CND-IDS, a continual novelty detection IDS framework which consists of (i) a learning-based feature extractor that continuously updates new feature representations of the system data, and (ii) a novelty detector that identifies new cyber attacks by leveraging principal component analysis (PCA) reconstruction.
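To illustrate the PCA-reconstruction idea behind the novelty detector (the exact scoring and thresholding in CND-IDS may differ), here is a minimal sketch; the dimensions, the 99th-percentile threshold, and all names are assumptions.

```python
# Hedged sketch: flag samples whose PCA reconstruction error, relative to known
# traffic, is unusually large, treating them as potential new attack types.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
known_feats = rng.normal(size=(1000, 32))        # features of known traffic classes
new_feats = rng.normal(size=(50, 32)) + 3.0      # incoming, possibly novel, traffic

pca = PCA(n_components=8).fit(known_feats)
recon = pca.inverse_transform(pca.transform(new_feats))
errors = np.linalg.norm(new_feats - recon, axis=1)

known_errors = np.linalg.norm(
    known_feats - pca.inverse_transform(pca.transform(known_feats)), axis=1)
threshold = np.percentile(known_errors, 99)      # illustrative threshold rule
is_novel = errors > threshold                    # True -> treat as a new cyber attack
```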
arXiv  Detail & Related papers  (2025-02-19T20:47:22Z) - Attention Tracker: Detecting Prompt Injection Attacks in LLMs [62.247841717696765]
Large Language Models (LLMs) have revolutionized various domains but remain vulnerable to prompt injection attacks.
We introduce the concept of the distraction effect, where specific attention heads shift focus from the original instruction to the injected instruction.
We propose Attention Tracker, a training-free detection method that tracks attention patterns on instruction to detect prompt injection attacks.
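As a rough illustration of the distraction effect (not the authors' exact scoring rule), the sketch below assumes per-head attention maps have already been extracted from the LLM and measures how much attention the final token pays to the original instruction span; a low score would suggest an injected prompt. All names and the threshold are assumptions.

```python
# Hedged sketch: score the last token's attention mass on the instruction span.
import numpy as np

def instruction_attention_score(attn: np.ndarray, instr_span: tuple) -> float:
    """attn: (num_heads, seq_len, seq_len) attention weights from one layer."""
    start, end = instr_span
    last_token_attn = attn[:, -1, :]              # (num_heads, seq_len)
    return float(last_token_attn[:, start:end].sum(axis=-1).mean())

# toy example: 8 heads over a 20-token prompt whose first 6 tokens are the instruction
rng = np.random.default_rng(1)
attn = rng.random((8, 20, 20))
attn /= attn.sum(axis=-1, keepdims=True)          # normalize rows like softmax output
score = instruction_attention_score(attn, (0, 6))
suspected_injection = score < 0.2                 # low focus on the instruction -> flag
```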
arXiv  Detail & Related papers  (2024-11-01T04:05:59Z) - Coarse-to-Fine Proposal Refinement Framework for Audio Temporal Forgery   Detection and Localization [60.899082019130766]
We introduce a frame-level detection network (FDN) and a proposal refinement network (PRN) for audio temporal forgery detection and localization.
FDN aims to mine informative inconsistency cues between real and fake frames to obtain discriminative features that are beneficial for roughly indicating forgery regions.
PRN is responsible for predicting confidence scores and regression offsets to refine the coarse-grained proposals derived from the FDN.
arXiv  Detail & Related papers  (2024-07-23T15:07:52Z) - Detection-Rate-Emphasized Multi-objective Evolutionary Feature Selection   for Network Intrusion Detection [21.104686670216445]
We propose DR-MOFS to model the feature selection problem in network intrusion detection as a three-objective optimization problem.
In most cases, the proposed method outperforms previous methods, i.e., it selects fewer features while achieving higher accuracy and detection rate.
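A minimal sketch of what evaluating one candidate feature subset on three objectives could look like (fewer selected features, higher accuracy, higher detection rate) is given below; the classifier, the use of recall as the detection rate, and all names are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: three objectives (to minimize) for a candidate feature mask.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

def evaluate_mask(mask, X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X[:, mask], y, test_size=0.3, random_state=0)
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    n_features = int(mask.sum())
    acc = accuracy_score(y_te, pred)
    det = recall_score(y_te, pred)                # detection rate ~ recall on the attack class
    return (n_features, 1.0 - acc, 1.0 - det)     # smaller is better for all three

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 3] + X[:, 7] > 0).astype(int)           # toy labels
mask = rng.random(20) > 0.5                       # one candidate from the evolutionary search
objectives = evaluate_mask(mask, X, y)
```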
arXiv  Detail & Related papers  (2024-06-13T14:42:17Z) - Multi-stage Attack Detection and Prediction Using Graph Neural Networks:   An IoT Feasibility Study [2.5325901958283126]
This paper proposes a novel 3-stage intrusion detection system inspired by a simplified version of the Lockheed Martin cyber kill chain.
The proposed approach consists of three models, each responsible for detecting a group of attacks with common characteristics.
Using the ToN IoT dataset, we achieved an average F1 score of 94% across the different stages, outperforming the benchmark approaches.
arXiv  Detail & Related papers  (2024-04-28T22:11:24Z) - An incremental hybrid adaptive network-based IDS in Software Defined   Networks to detect stealth attacks [0.0]
Advanced Persistent Threats (APTs) are attacks that employ a wide range of strategies to evade detection.
Machine Learning (ML) techniques are widely used in Intrusion Detection Systems (IDSs) to detect such attacks, but they struggle when the data distribution changes.
An incremental hybrid adaptive Network Intrusion Detection System (NIDS) is proposed to tackle the issue of concept drift in SDN.
arXiv  Detail & Related papers  (2024-04-01T13:33:40Z) - Unified Physical-Digital Face Attack Detection [66.14645299430157]
Face Recognition (FR) systems can suffer from physical (i.e., print photo) and digital (i.e., DeepFake) attacks.
Previous related work rarely considers both situations at the same time.
We propose a Unified Attack Detection framework based on Vision-Language Models (VLMs).
arXiv  Detail & Related papers  (2024-01-31T09:38:44Z) - Exploring Highly Quantised Neural Networks for Intrusion Detection in Automotive CAN [13.581341206178525]
Machine learning-based intrusion detection models have been shown to successfully detect multiple targeted attack vectors.
In this paper, we present a case for a custom-quantised multi-layer perceptron (CQMLP) as a multi-class classification model.
We show that the 2-bit CQMLP model, when integrated as the IDS, can detect malicious attack messages with a very high accuracy of 99.9%.
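To illustrate the kind of weight compression a 2-bit model implies (not the paper's exact quantisation scheme), here is a small sketch of symmetric 2-bit quantisation of a weight matrix; the names and rounding rule are assumptions.

```python
# Hedged sketch: symmetric 2-bit quantisation. 2 bits give 4 levels,
# here mapped to {-2, -1, 0, 1} * scale.
import numpy as np

def quantise_2bit(weights: np.ndarray):
    scale = np.abs(weights).max() / 2.0 if weights.size else 1.0
    q = np.clip(np.round(weights / scale), -2, 1).astype(np.int8)   # 4 integer levels
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 8))           # toy weight matrix
qW, s = quantise_2bit(W)
W_hat = dequantise(qW, s)                         # approximation used at inference time
```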
arXiv  Detail & Related papers  (2024-01-19T21:11:02Z) - Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv  Detail & Related papers  (2023-09-28T22:31:29Z) - Efficient Network Representation for GNN-based Intrusion Detection [2.321323878201932]
The last decades have seen a growth in the number of cyber-attacks with severe economic and privacy damages.
We propose a novel network representation as a graph of flows that aims to provide relevant topological information for the intrusion detection task.
We present a Graph Neural Network (GNN) based framework responsible for exploiting the proposed graph structure.
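As a rough illustration of a graph-of-flows representation (the paper's exact construction may differ), the sketch below connects flows that share an endpoint, producing a graph a GNN could consume; the field names and edge rule are assumptions.

```python
# Hedged sketch: nodes are flows, edges connect flows sharing a host.
import networkx as nx

flows = [
    {"id": 0, "src": "10.0.0.1", "dst": "10.0.0.2", "bytes": 1200},
    {"id": 1, "src": "10.0.0.2", "dst": "10.0.0.3", "bytes": 300},
    {"id": 2, "src": "10.0.0.4", "dst": "10.0.0.3", "bytes": 90000},
]

G = nx.Graph()
for f in flows:
    G.add_node(f["id"], bytes=f["bytes"])          # per-flow features live on the nodes

for i, fi in enumerate(flows):
    for fj in flows[i + 1:]:
        if {fi["src"], fi["dst"]} & {fj["src"], fj["dst"]}:   # shared endpoint
            G.add_edge(fi["id"], fj["id"])

# G can now be converted to tensors (edge index + node features) for a GNN framework.
```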
arXiv  Detail & Related papers  (2023-09-11T16:10:12Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Our defense, MESAS, is the first to be robust against strong adaptive adversaries, remaining effective in real-world data scenarios with an average overhead of just 24.37 seconds.
arXiv  Detail & Related papers  (2023-06-06T11:44:42Z) - Spatial-Frequency Discriminability for Revealing Adversarial   Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition of natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv  Detail & Related papers  (2023-05-18T10:18:59Z) - TAD: Transfer Learning-based Multi-Adversarial Detection of Evasion Attacks against Network Intrusion Detection Systems [0.7829352305480285]
We implement existing state-of-the-art models for intrusion detection.
We then attack those models with a set of chosen evasion attacks.
In an attempt to detect those adversarial attacks, we design and implement multiple transfer learning-based adversarial detectors.
arXiv  Detail & Related papers  (2022-10-27T18:02:58Z) - PSNet: Parallel Symmetric Network for Video Salient Object Detection [85.94443548452729]
We propose a VSOD network with up and down parallel symmetry, named PSNet.
Two parallel branches with different dominant modalities are set to achieve complete video saliency decoding.
arXiv  Detail & Related papers  (2022-10-12T04:11:48Z) - Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless systems against attacks improves significantly.
arXiv  Detail & Related papers  (2022-06-14T04:55:11Z) - NetSentry: A Deep Learning Approach to Detecting Incipient Large-scale Network Attacks [9.194664029847019]
We show how to use Machine Learning for Network Intrusion Detection (NID) in a principled way.
We propose NetSentry, perhaps the first NIDS of its kind, which builds on Bi-ALSTM, an original ensemble of sequential neural models.
We demonstrate F1 score gains above 33% over the state-of-the-art, as well as up to 3 times higher rates of detecting attacks such as XSS and web brute-force.
arXiv  Detail & Related papers  (2022-02-20T17:41:02Z) - PointBA: Towards Backdoor Attacks in 3D Point Cloud [31.210502946247498]
We present the backdoor attacks in 3D with a unified framework that exploits the unique properties of 3D data and networks.
Our proposed backdoor attack on 3D point clouds is expected to serve as a baseline for improving the robustness of 3D deep models.
arXiv  Detail & Related papers  (2021-03-30T04:49:25Z) - Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv  Detail & Related papers  (2021-01-28T16:18:19Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv  Detail & Related papers  (2020-06-21T19:45:30Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv  Detail & Related papers  (2020-06-08T20:42:39Z) 
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.