A Novel Deep Learning based Model to Defend Network Intrusion Detection
System against Adversarial Attacks
- URL: http://arxiv.org/abs/2308.00077v1
- Date: Mon, 31 Jul 2023 18:48:39 GMT
- Title: A Novel Deep Learning based Model to Defend Network Intrusion Detection
System against Adversarial Attacks
- Authors: Khushnaseeb Roshan, Aasim Zafar, Shiekh Burhan Ul Haque
- Abstract summary: The main aim of this research work is to study powerful adversarial attack algorithms and a defence method for DL-based NIDS.
As a defence method, Adversarial Training is used to increase the robustness of the NIDS model.
The results are summarized in three phases, i.e., 1) before the adversarial attack, 2) after the adversarial attack, and 3) after the adversarial defence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Network Intrusion Detection System (NIDS) is an essential tool for
securing cyberspace against a variety of security risks and unknown
cyberattacks. A number of Machine Learning (ML) and Deep Learning (DL) based
NIDS solutions have been implemented. However, all these solutions are
vulnerable to adversarial attacks, in which a malicious actor tries to evade
or fool the model by injecting adversarially perturbed examples into the
system. The main aim of this research work is to study powerful adversarial
attack algorithms and a defence method for DL-based NIDS. Fast Gradient Sign
Method (FGSM), Jacobian Saliency Map Attack (JSMA), Projected Gradient Descent
(PGD) and Carlini & Wagner (C&W) are the four powerful adversarial attack
methods implemented against the NIDS. As a defence method, Adversarial
Training is used to increase the robustness of the NIDS model. The results are
summarized in three phases: 1) before the adversarial attack, 2) after the
adversarial attack, and 3) after the adversarial defence. The Canadian
Institute for Cybersecurity Intrusion Detection System 2017 (CICIDS-2017)
dataset is used for evaluation, with performance measured by metrics such as
F1-score and accuracy.
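As a concrete illustration of the attack and defence pair at the core of this paper, a minimal FGSM and adversarial-training step for a generic PyTorch classifier might look like the sketch below; model, optimizer, and epsilon are placeholders rather than the authors' exact setup, and NIDS features are assumed scaled to [0, 1].
```python
# Minimal sketch (not the authors' code): FGSM crafting and one
# adversarial-training step for a generic PyTorch classifier.
# Assumes tabular NIDS features scaled to [0, 1].
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon):
    """x_adv = clip(x + epsilon * sign(grad_x L(model(x), y)), 0, 1)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return torch.clamp(x_adv + epsilon * x_adv.grad.sign(), 0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """Train on a 50/50 mix of clean and FGSM-perturbed examples."""
    x_adv = fgsm(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```
JSMA, PGD and C&W follow the same white-box pattern but differ in how the perturbation is chosen and constrained.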
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to imperceptible adversarial perturbations in high-level image classification and attack suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- A Dual-Tier Adaptive One-Class Classification IDS for Emerging Cyberthreats [3.560574387648533]
We propose a one-class classification-driven IDS structured in two tiers.
The first tier distinguishes between normal activities and attacks/threats, while the second tier determines if the detected attack is known or unknown.
This model not only identifies unseen attacks but also uses them for retraining by clustering them.
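A minimal sketch of such a tiered design (illustrative only, not the paper's implementation; the feature matrices below are random placeholders):
```python
# Hedged sketch of a dual-tier one-class IDS. Tier 1 models benign
# traffic only; tier 2 models known attack traffic only.
import numpy as np
from sklearn.svm import OneClassSVM

# Placeholder feature matrices; in practice these come from labelled flows.
X_normal = np.random.rand(1000, 20)
X_known_attacks = np.random.rand(200, 20)

tier1 = OneClassSVM(nu=0.01).fit(X_normal)
tier2 = OneClassSVM(nu=0.05).fit(X_known_attacks)

def classify(x):
    """Tier 1: normal vs. anomalous; tier 2: known vs. unknown attack."""
    if tier1.predict(x.reshape(1, -1))[0] == 1:
        return "normal"
    if tier2.predict(x.reshape(1, -1))[0] == 1:
        return "known attack"
    return "unknown attack"  # cluster these and retrain tier 2
```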
arXiv Detail & Related papers (2024-03-17T12:26:30Z)
- Untargeted White-box Adversarial Attack with Heuristic Defence Methods in Real-time Deep Learning based Network Intrusion Detection System [0.0]
In Adversarial Machine Learning (AML), malicious actors aim to fool Machine Learning (ML) and Deep Learning (DL) models into producing incorrect predictions.
AML is an emerging research domain, and the in-depth study of adversarial attacks has become a necessity.
We implement four powerful adversarial attack techniques, namely Fast Gradient Sign Method (FGSM), Jacobian Saliency Map Attack (JSMA), Projected Gradient Descent (PGD) and Carlini & Wagner (C&W), against the NIDS.
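Of these, PGD shows the iterative gradient pattern most clearly; below is a hedged sketch of untargeted L-infinity PGD in PyTorch, with illustrative hyperparameters and inputs assumed scaled to [0, 1].
```python
# Sketch of untargeted L-infinity PGD (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def pgd(model, x, y, epsilon=0.1, alpha=0.02, steps=10):
    """Iterate FGSM-style steps, projecting back into the epsilon ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -epsilon, epsilon)  # project
        x_adv = torch.clamp(x_adv, 0, 1).detach()              # valid range
    return x_adv
```
FGSM is the single-step special case; C&W instead solves an optimization problem over the perturbation, and JSMA perturbs the few features with the largest saliency.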
arXiv Detail & Related papers (2023-10-05T06:32:56Z)
- Towards Adversarial Realism and Robust Learning for IoT Intrusion Detection and Classification [0.0]
The Internet of Things (IoT) faces tremendous security challenges.
The increasing threat posed by adversarial attacks reinforces the need for reliable defense strategies.
This work describes the types of constraints required for an adversarial cyber-attack example to be realistic.
arXiv Detail & Related papers (2023-01-30T18:00:28Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Adversarial Attack and Defense in Deep Ranking [100.17641539999055]
We propose two attacks against deep ranking systems that can raise or lower the rank of chosen candidates by adversarial perturbations.
Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks.
Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online Products datasets.
arXiv Detail & Related papers (2021-06-07T13:41:45Z)
- Universal Adversarial Training with Class-Wise Perturbations [78.05383266222285]
Adversarial training is the most widely used method for defending against adversarial attacks.
In this work, we find that a Universal Adversarial Perturbation (UAP) does not attack all classes equally.
We improve state-of-the-art Universal Adversarial Training (UAT) by utilizing class-wise UAPs during adversarial training.
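A simplified sketch of the class-wise idea (not the authors' UAT code): keep one universal perturbation per class and take a gradient-ascent step on it while the model takes its usual descent step.
```python
# Simplified sketch of class-wise universal adversarial training.
import torch
import torch.nn.functional as F

def classwise_uat_step(model, optimizer, x, y, delta, eps=0.03, alpha=0.01):
    """delta: (num_classes, *input_shape) tensor, one UAP per class."""
    pert = delta[y].detach().requires_grad_(True)
    loss = F.cross_entropy(model(torch.clamp(x + pert, 0, 1)), y)
    optimizer.zero_grad()
    loss.backward()
    with torch.no_grad():  # gradient ascent on the per-class UAPs
        # If a class repeats within the batch, the last write wins (fine
        # for a sketch; a real implementation would aggregate updates).
        delta[y] = torch.clamp(delta[y] + alpha * pert.grad.sign(), -eps, eps)
    optimizer.step()  # gradient descent on the model parameters
    return loss.item()
```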
arXiv Detail & Related papers (2021-04-07T09:05:49Z)
- ExAD: An Ensemble Approach for Explanation-based Adversarial Detection [17.455233006559734]
We propose ExAD, a framework to detect adversarial examples using an ensemble of explanation techniques.
We evaluate our approach using six state-of-the-art adversarial attacks on three image datasets.
arXiv Detail & Related papers (2021-03-22T00:53:07Z)
- Cyber Intrusion Detection by Using Deep Neural Networks with Attack-sharing Loss [10.240568633711817]
Cyber attacks pose serious threats to computer system security and put digital assets at high risk.
It is challenging to classify the intrusion events due to the wide variety of attacks.
DeepIDEA takes full advantage of deep learning to enable intrusion detection and classification.
arXiv Detail & Related papers (2021-03-17T15:15:12Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)