A Simple Framework to Enhance the Adversarial Robustness of Deep
Learning-based Intrusion Detection System
- URL: http://arxiv.org/abs/2312.03245v1
- Date: Wed, 6 Dec 2023 02:33:12 GMT
- Title: A Simple Framework to Enhance the Adversarial Robustness of Deep
Learning-based Intrusion Detection System
- Authors: Xinwei Yuan, Shu Han, Wei Huang, Hongliang Ye, Xianglong Kong and Fan
Zhang
- Abstract summary: We propose a novel IDS architecture that can enhance the robustness of IDS against adversarial attacks.
The proposed DLL-IDS consists of three components: a DL-based IDS, an adversarial example detector, and an ML-based IDS.
In our experiments, we observe a significant improvement in the prediction performance of the IDS when subjected to adversarial attack.
- Score: 5.189166936995511
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning based intrusion detection systems (DL-based IDS) have emerged
as one of the best choices for providing security solutions against various
network intrusion attacks. However, the emergence and rapid development of
adversarial deep learning techniques make it challenging to adopt DL models
in IDS. In this paper, we propose a novel IDS architecture that
can enhance the robustness of IDS against adversarial attacks by combining
conventional machine learning (ML) models and Deep Learning models. The
proposed DLL-IDS consists of three components: DL-based IDS, adversarial
example (AE) detector, and ML-based IDS. We first develop a novel AE detector
based on the local intrinsic dimensionality (LID). Then, we exploit the low
attack transferability between DL models and ML models to find a robust ML
model that can assist us in determining the maliciousness of AEs. If the input
traffic is detected as an AE, the ML-based IDS will predict its maliciousness;
otherwise, the DL-based IDS will make the prediction. The
fusion mechanism can leverage the high prediction accuracy of DL models and low
attack transferability between DL models and ML models to improve the
robustness of the whole system. In our experiments, we observe a significant
improvement in the prediction performance of the IDS when subjected to
adversarial attack, achieving high accuracy with low resource consumption.
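The routing logic described above, an LID-based AE detector deciding whether the ML-based or DL-based IDS handles each input, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the maximum-likelihood LID estimator is the standard one from the LID literature, and all function names and the threshold are assumptions.

```python
import math

def lid_mle(knn_distances):
    """Maximum-likelihood estimate of local intrinsic dimensionality (LID)
    from a sample's sorted k-nearest-neighbor distances. Adversarial
    examples tend to exhibit higher LID than clean inputs."""
    r_max = knn_distances[-1]
    log_sum = sum(math.log(r / r_max) for r in knn_distances if r > 0)
    return -len(knn_distances) / log_sum if log_sum != 0 else float("inf")

def dll_ids_predict(x, knn_distances, dl_model, ml_model, lid_threshold):
    """Fusion mechanism sketch: suspected AEs (high LID) are routed to the
    robust ML-based IDS (low attack transferability); clean-looking traffic
    goes to the more accurate DL-based IDS."""
    if lid_mle(knn_distances) > lid_threshold:
        return ml_model(x)   # suspected adversarial example
    return dl_model(x)       # presumed clean traffic
```

The threshold would in practice be calibrated on a validation set of clean traffic, so that only a small false-positive fraction of benign inputs is diverted to the ML path.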
Related papers
- Attention Tracker: Detecting Prompt Injection Attacks in LLMs [62.247841717696765]
Large Language Models (LLMs) have revolutionized various domains but remain vulnerable to prompt injection attacks.
We introduce the concept of the distraction effect, where specific attention heads shift focus from the original instruction to the injected instruction.
We propose Attention Tracker, a training-free detection method that tracks attention patterns on instruction to detect prompt injection attacks.
arXiv Detail & Related papers (2024-11-01T04:05:59Z)
- Development of an Edge Resilient ML Ensemble to Tolerate ICS Adversarial Attacks [0.9437165725355702]
We build a resilient edge machine learning architecture that is designed to withstand adversarial attacks.
The reML is based on the Resilient DDDAS paradigm, Moving Target Defense (MTD) theory, and TinyML.
The proposed approach is power-efficient and privacy-preserving and, therefore, can be deployed on power-constrained devices to enhance ICS security.
arXiv Detail & Related papers (2024-09-26T19:37:37Z)
- Extending Network Intrusion Detection with Enhanced Particle Swarm Optimization Techniques [0.0]
The present research investigates how to improve Network Intrusion Detection Systems (NIDS) by combining Machine Learning (ML) and Deep Learning (DL) techniques.
The study uses the CSE-CIC-IDS 2018 and LITNET-2020 datasets to compare ML methods (Decision Trees, Random Forest, XGBoost) and DL models (CNNs, RNNs, DNNs) against key performance metrics.
The Decision Tree model performed better across all measures after being fine-tuned with Enhanced Particle Swarm Optimization (EPSO), demonstrating the model's ability to detect network breaches effectively.
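Particle swarm optimization, the basis of the EPSO tuning step mentioned above, can be sketched as a box-bounded minimizer. This is textbook PSO, not the paper's enhanced variant; the fitness function, bounds, and coefficients are illustrative assumptions (in an NIDS setting, fitness would be a validation-set error of the tuned classifier).

```python
import random

def pso_minimize(fitness, bounds, n_particles=10, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization over a box-bounded search space,
    e.g. tuning a Decision Tree's hyperparameters against a validation score."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best position
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]         # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp the updated position back into the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```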
arXiv Detail & Related papers (2024-08-14T17:11:36Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- X-CBA: Explainability Aided CatBoosted Anomal-E for Intrusion Detection System [2.556190321164248]
Using machine learning (ML) and deep learning (DL) models in Intrusion Detection Systems has led to a trust deficit due to their non-transparent decision-making.
This paper introduces a novel Explainable IDS approach, called X-CBA, that leverages the structural advantages of Graph Neural Networks (GNNs) to effectively process network traffic data.
Our approach achieves high accuracy with 99.47% in threat detection and provides clear, actionable explanations of its analytical outcomes.
arXiv Detail & Related papers (2024-02-01T18:29:16Z)
- Intrusion Detection System with Machine Learning and Multiple Datasets [0.0]
In this paper, an enhanced intrusion detection system (IDS) that utilizes machine learning (ML) is explored.
Ultimately, this improved system can be used to combat attacks by malicious actors.
arXiv Detail & Related papers (2023-12-04T14:58:19Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) sees an increasing prevalence of being used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal will greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessment for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Reconstruction-based LSTM-Autoencoder for Anomaly-based DDoS Attack Detection over Multivariate Time-Series Data [6.642599588462097]
A Distributed Denial-of-service (DDoS) attack is a malicious attempt to disrupt the regular traffic of a targeted server, service, or network by sending a flood of traffic to overwhelm the target or its surrounding infrastructure.
Traditional statistical and shallow machine learning techniques can detect superficial anomalies based on shallow data and feature selection; however, these approaches cannot detect unseen DDoS attacks.
We propose a reconstruction-based anomaly detection model named LSTM-Autoencoder (LSTM-AE) which combines two deep learning-based models for detecting DDoS attack anomalies.
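The reconstruction-based principle can be sketched as follows: an autoencoder trained only on benign traffic reconstructs benign windows well, so a large reconstruction error marks a window as anomalous. In this sketch `reconstruct` is a stand-in for the trained LSTM-Autoencoder, and the threshold, which would typically be set from the error distribution on held-out benign data, is an assumption.

```python
def mse(a, b):
    """Mean squared error between a traffic window and its reconstruction."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def detect_anomalies(windows, reconstruct, threshold):
    """Flag each window whose reconstruction error exceeds the threshold.
    `reconstruct` stands in for a model trained on benign traffic only, so
    anomalous (e.g. DDoS) windows reconstruct poorly."""
    return [mse(w, reconstruct(w)) > threshold for w in windows]
```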
arXiv Detail & Related papers (2023-04-21T03:56:03Z)
- Semantic Image Attack for Visual Model Diagnosis [80.36063332820568]
In practice, metric analysis on a specific train and test dataset does not guarantee reliable or fair ML models.
This paper proposes Semantic Image Attack (SIA), a method based on the adversarial attack that provides semantic adversarial images.
arXiv Detail & Related papers (2023-03-23T03:13:04Z)
- A new interpretable unsupervised anomaly detection method based on residual explanation [47.187609203210705]
We present RXP, a new interpretability method to deal with the limitations for AE-based AD in large-scale systems.
It stands out for its implementation simplicity, low computational cost and deterministic behavior.
In an experiment using data from a real heavy-haul railway line, the proposed method achieved superior performance compared to SHAP.
arXiv Detail & Related papers (2021-03-14T15:35:45Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
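The zeroth-order optimization mentioned above estimates gradients from input-output queries alone, which is what makes black-box reprogramming possible. A minimal sketch of a standard random-direction two-point estimator (not BAR's exact scheme; names and defaults are assumptions):

```python
import random

def zoo_gradient(loss, x, mu=1e-3, q=50):
    """Zeroth-order gradient estimate using only function queries, the kind
    of estimator usable when a black-box model exposes nothing but its
    input-output responses. Averages q two-point finite differences taken
    along random Gaussian directions."""
    d = len(x)
    grad = [0.0] * d
    fx = loss(x)
    for _ in range(q):
        u = [random.gauss(0.0, 1.0) for _ in range(d)]       # random direction
        delta = (loss([xi + mu * ui for xi, ui in zip(x, u)]) - fx) / mu
        for j in range(d):
            grad[j] += delta * u[j] / q                      # average projection
    return grad
```

The estimate's variance shrinks as the number of queries q grows, so the attacker trades query budget against gradient accuracy.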
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.