HPAC-IDS: A Hierarchical Packet Attention Convolution for Intrusion Detection System
- URL: http://arxiv.org/abs/2501.06264v1
- Date: Thu, 09 Jan 2025 15:24:49 GMT
- Title: HPAC-IDS: A Hierarchical Packet Attention Convolution for Intrusion Detection System
- Authors: Anass Grini, Btissam El Khamlichi, Abdellatif El Afia, Amal El Fallah-Seghrouchni
- Abstract summary: This research introduces a robust detection system against malicious network traffic, leveraging hierarchical structures and self-attention mechanisms.
The proposed system includes a Packet Segmenter that divides a given raw network packet into fixed-size segments that are fed to the HPAC-IDS.
Experiments performed on the CIC-IDS2017 dataset show that the system exhibits high accuracy and low false positive rates.
- Score: 0.8437187555622164
- Abstract: This research introduces a robust detection system against malicious network traffic, leveraging hierarchical structures and self-attention mechanisms. The proposed system includes a Packet Segmenter that divides a given raw network packet into fixed-size segments that are fed to the HPAC-IDS. Experiments performed on the CIC-IDS2017 dataset show that the system exhibits high accuracy and low false positive rates while demonstrating resilience against diverse adversarial methods such as the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and Wasserstein GAN (WGAN). The model's ability to withstand adversarial perturbations is attributed to the fusion of hierarchical attention mechanisms and convolutional neural networks, resulting in an adversarial attack severity of only 0% to 10% under the tested attacks across different segment sizes, surpassing the state-of-the-art model in both detection performance and adversarial robustness.
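The segmentation step is simple to prototype. Below is a minimal sketch of a fixed-size packet segmenter in Python; byte-level input and zero-padding of the final segment are assumptions here, since the abstract specifies neither:

```python
import numpy as np

def segment_packet(raw_bytes: bytes, segment_size: int) -> np.ndarray:
    """Split a raw packet into fixed-size byte segments.

    The final segment is zero-padded to segment_size; padding is an
    assumption, as the abstract does not say how partial segments
    are handled.
    """
    data = np.frombuffer(raw_bytes, dtype=np.uint8)
    n_segments = -(-len(data) // segment_size)  # ceiling division
    padded = np.zeros(n_segments * segment_size, dtype=np.uint8)
    padded[:len(data)] = data
    # Shape (n_segments, segment_size), ready for the hierarchical model.
    return padded.reshape(n_segments, segment_size)

# Example: a 1500-byte packet split into 64-byte segments.
segments = segment_packet(bytes(1500), segment_size=64)
print(segments.shape)  # (24, 64)
```

For reference, the FGSM attack cited in the evaluation perturbs an input x as x_adv = x + ε·sign(∇_x L(x, y)), and PGD applies this step iteratively with projection back onto the ε-ball.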
Related papers
- Pulling Back the Curtain: Unsupervised Adversarial Detection via Contrastive Auxiliary Networks [0.0]
We propose Unsupervised adversarial detection via Contrastive Auxiliary Networks (U-CAN) to uncover adversarial behavior within auxiliary feature representations.
Our method surpasses existing unsupervised adversarial detection techniques, achieving superior F1 scores against four distinct attack methods.
arXiv Detail & Related papers (2025-02-13T09:40:26Z)
- Comprehensive Botnet Detection by Mitigating Adversarial Attacks, Navigating the Subtleties of Perturbation Distances and Fortifying Predictions with Conformal Layers [1.6001193161043425]
Botnets are computer networks controlled by malicious actors and present significant cybersecurity challenges.
This research addresses the sophisticated adversarial manipulations posed by attackers, aiming to undermine machine learning-based botnet detection systems.
We introduce a flow-based detection approach, leveraging machine learning and deep learning algorithms trained on the ISCX and ISOT datasets.
arXiv Detail & Related papers (2024-09-01T08:53:21Z)
- Novelty Detection in Network Traffic: Using Survival Analysis for Feature Identification [1.933681537640272]
Intrusion Detection Systems are an important component of many organizations' cyber defense and resiliency strategies.
One downside of these systems is their reliance on known attack signatures for detection of malicious network events.
We introduce an unconventional, survival-analysis-based approach to identifying the network traffic features that influence novelty detection.
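The summary does not give the exact procedure, but the general idea, fitting a survival model to time-to-event network data and ranking features by their fitted effect, can be sketched with the lifelines library (column names and data are illustrative):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy flow-level data: each row is a connection, with time until an
# alert fires ("duration") and an event indicator ("alerted",
# 1 = alert observed, 0 = censored). Column names are illustrative.
df = pd.DataFrame({
    "duration":  [12.0, 3.5, 40.2, 7.1, 25.0, 18.3],
    "alerted":   [1, 1, 0, 1, 0, 1],
    "pkt_rate":  [850, 300, 90, 600, 410, 150],
    "mean_size": [60, 400, 512, 70, 90, 300],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="alerted")

# Features with large |coef| most strongly shift the hazard of an
# alert, suggesting they matter for novelty detection.
print(cph.summary[["coef", "p"]])
```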
arXiv Detail & Related papers (2023-01-16T01:40:29Z)
- Masked Spatial-Spectral Autoencoders Are Excellent Hyperspectral Defenders [15.839321488352535]
We propose a masked spatial-spectral autoencoder (MSSA) for enhancing the robustness of HSI analysis systems.
To improve the defense transferability and address the problem of limited labelled samples, MSSA employs spectra reconstruction as a pretext task.
Comprehensive experiments over three benchmarks verify the effectiveness of MSSA in comparison with the state-of-the-art hyperspectral classification methods.
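A minimal sketch of a spectra-reconstruction pretext task is shown below; the masking ratio, architecture, and loss are illustrative stand-ins, not the paper's MSSA design:

```python
import torch
import torch.nn as nn

# Toy spectra-reconstruction pretext task: randomly mask spectral
# bands of each pixel and train the network to reconstruct them.
# The masking ratio and architecture are illustrative only.
n_bands, mask_ratio = 200, 0.5
model = nn.Sequential(
    nn.Linear(n_bands, 128), nn.ReLU(), nn.Linear(128, n_bands)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

spectra = torch.rand(64, n_bands)            # batch of pixel spectra
mask = (torch.rand_like(spectra) < mask_ratio).float()
masked_in = spectra * (1 - mask)             # zero out masked bands

opt.zero_grad()
recon = model(masked_in)
loss = ((recon - spectra) ** 2 * mask).mean()  # penalize masked bands only
loss.backward()
opt.step()
print(loss.item())
```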
arXiv Detail & Related papers (2022-07-16T01:33:13Z)
- Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis [61.68061613161187]
Z-Mask is a robust and effective strategy to improve the robustness of convolutional networks against adversarial attacks.
The presented defense relies on a Z-score analysis of internal network features to detect and mask the pixels corresponding to adversarial objects in the input image.
Additional experiments showed that Z-Mask is also robust against possible defense-aware attacks.
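The core detection step, flagging spatial locations whose internal activations are statistical outliers, can be sketched as follows; the threshold and channel aggregation are assumptions rather than the paper's exact analysis:

```python
import torch

def z_mask(features: torch.Tensor, threshold: float = 3.0) -> torch.Tensor:
    """Flag spatial locations whose activations are statistical outliers.

    features: (C, H, W) feature map from an internal convolutional layer.
    Returns an (H, W) binary mask where 1 marks locations whose most
    extreme per-channel Z-score exceeds the threshold (candidate
    adversarial regions). Threshold and aggregation are assumptions.
    """
    mu = features.mean(dim=(1, 2), keepdim=True)           # per-channel mean
    sigma = features.std(dim=(1, 2), keepdim=True) + 1e-8  # per-channel std
    z = (features - mu) / sigma                            # Z-scores
    return (z.amax(dim=0) > threshold).float()             # worst channel wins

feat = torch.randn(256, 32, 32)
mask = z_mask(feat)
print(mask.shape, int(mask.sum()))  # torch.Size([32, 32]) and flagged count
```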
arXiv Detail & Related papers (2022-03-14T17:41:46Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks can get rid of the limitations of gradient-based attack methods.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision tasks (e.g., rotation and jigsaw) benefit image tasks such as classification and recognition, they fail to provide the critical supervision signals needed to learn discriminative representations for segmentation.
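As a rough illustration of maximizing output-space agreement between clean and adversarial views, the sketch below uses a per-pixel KL consistency term; the paper's actual contrastive loss may be formulated differently:

```python
import torch
import torch.nn.functional as F

def agreement_loss(logits_clean: torch.Tensor,
                   logits_adv: torch.Tensor) -> torch.Tensor:
    """Encourage matching segmentation predictions for a clean image
    and its adversarial counterpart.

    logits_*: (B, num_classes, H, W) output-space predictions. This
    per-pixel KL consistency term is a simplified stand-in for the
    paper's contrastive loss.
    """
    p_clean = F.softmax(logits_clean, dim=1)
    log_p_adv = F.log_softmax(logits_adv, dim=1)
    return F.kl_div(log_p_adv, p_clean, reduction="batchmean")

clean = torch.randn(2, 19, 64, 64)           # e.g. 19 Cityscapes classes
adv = clean + 0.1 * torch.randn_like(clean)  # stand-in adversarial logits
print(agreement_loss(clean, adv).item())
```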
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
- Selective and Features based Adversarial Example Detection [12.443388374869745]
Security-sensitive applications that rely on Deep Neural Networks (DNNs) are vulnerable to small perturbations crafted to generate Adversarial Examples (AEs).
We propose a novel unsupervised detection mechanism that uses the selective prediction, processing of model layer outputs, and knowledge transfer concepts in a multi-task learning setting.
Experimental results show that the proposed approach achieves results comparable to state-of-the-art methods against the tested attacks in the white-box scenario and better results in the black-box and gray-box scenarios.
arXiv Detail & Related papers (2021-03-09T11:06:15Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- Understanding and Diagnosing Vulnerability under Adversarial Attacks [62.661498155101654]
Deep Neural Networks (DNNs) are known to be vulnerable to adversarial attacks.
We propose a novel interpretability method, InterpretGAN, to generate explanations for features used for classification in latent variables.
We also design the first diagnostic method to quantify the vulnerability contributed by each layer.
arXiv Detail & Related papers (2020-07-17T01:56:28Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.