Classification and Explanation of Distributed Denial-of-Service (DDoS)
Attack Detection using Machine Learning and Shapley Additive Explanation
(SHAP) Methods
- URL: http://arxiv.org/abs/2306.17190v1
- Date: Tue, 27 Jun 2023 04:51:29 GMT
- Title: Classification and Explanation of Distributed Denial-of-Service (DDoS)
Attack Detection using Machine Learning and Shapley Additive Explanation
(SHAP) Methods
- Authors: Yuanyuan Wei, Julian Jang-Jaccard, Amardeep Singh, Fariza Sabrina,
Seyit Camtepe
- Abstract summary: Distinguishing between legitimate traffic and malicious traffic is a challenging task.
Explaining why a model classifies a traffic flow as benign or malicious offers insight into its inner workings and is important for establishing trust in the model.
We propose a framework that not only classifies legitimate and malicious DDoS traffic but also uses SHAP to explain the model's decision-making.
- Score: 4.899818550820576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: DDoS attacks overwhelm a target system with a large volume of
requests or traffic from multiple sources, disrupting the normal traffic of a
targeted server, service, or network. Distinguishing between legitimate and
malicious traffic is a challenging task. Machine learning and deep learning
techniques can classify legitimate and malicious traffic and analyze network
flows. However, explaining why a model classifies a traffic flow as benign or
malicious gives insight into the model's inner workings and increases its
trustworthiness. Explainable Artificial Intelligence (XAI) can explain the
decision-making of machine learning models that classify and identify DDoS
traffic. In this context, we propose a framework that not only classifies
legitimate and malicious DDoS traffic but also uses SHAP to explain the
decision-making of the classifier model. We first adopt feature selection
techniques to select the top 20 important features based on feature importance
techniques (e.g., XGB-based SHAP feature importance). Following that, the
Multi-layer Perceptron (MLP) part of our proposed model uses the selected
features of the DDoS attack dataset as inputs to classify legitimate and
malicious traffic. We perform extensive experiments with all features and with
the selected features. The evaluation results show that the model achieves
above 99% accuracy with the selected features. Finally, to provide
interpretability, we adopt SHAP to relate the prediction results to the input
features through both global and local explanations, which better explain the
results achieved by our proposed framework.
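
For reference, SHAP assigns each feature i an additive contribution phi_i, the
Shapley value of the model f over the feature set F (this is the standard
definition, not notation taken from the paper itself):

\[
\phi_i = \sum_{S \subseteq F \setminus \{i\}}
\frac{|S|!\,(|F| - |S| - 1)!}{|F|!}
\left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right]
\]

where f_S denotes the model restricted to the feature subset S; the prediction
then decomposes additively as f(x) = E[f] + sum_i phi_i.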
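A minimal sketch of the XGB-based SHAP feature-selection step described in the
abstract. The variables X and y are placeholders for a preprocessed DDoS
feature matrix and binary labels; the paper's exact dataset, preprocessing,
and hyperparameters are not reproduced here.

    import numpy as np
    import shap
    import xgboost as xgb

    # X: (n_samples, n_features) numpy array; y: binary labels
    # (0 = benign, 1 = malicious). Assumed to come from a preprocessed
    # DDoS attack dataset.
    model = xgb.XGBClassifier(n_estimators=200, max_depth=6,
                              eval_metric="logloss")
    model.fit(X, y)

    # TreeExplainer computes SHAP values efficiently for tree ensembles.
    shap_values = shap.TreeExplainer(model).shap_values(X)

    # Rank features by mean absolute SHAP value and keep the top 20.
    mean_abs = np.abs(shap_values).mean(axis=0)
    top20 = np.argsort(mean_abs)[::-1][:20]
    X_selected = X[:, top20]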
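The MLP classification stage could then look like the following sketch, which
continues from the previous one. The hidden-layer sizes and training settings
are illustrative placeholders, since the paper's exact architecture is not
given here.

    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler

    X_train, X_test, y_train, y_test = train_test_split(
        X_selected, y, test_size=0.2, stratify=y, random_state=42)

    # Standardize the 20 selected features before feeding them to the MLP.
    scaler = StandardScaler().fit(X_train)
    X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

    mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300,
                        random_state=42)
    mlp.fit(X_train_s, y_train)
    print(f"accuracy: {accuracy_score(y_test, mlp.predict(X_test_s)):.4f}")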
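Finally, one way to produce the global and local SHAP explanations for the MLP
is the model-agnostic KernelExplainer; the background and sample sizes below
are arbitrary choices to keep the computation tractable, not values from the
paper.

    # Explain the predicted probability of the malicious class.
    predict_malicious = lambda x: mlp.predict_proba(x)[:, 1]
    background = shap.sample(X_train_s, 100)  # small background set
    explainer = shap.KernelExplainer(predict_malicious, background)
    sv = explainer.shap_values(X_test_s[:200])

    # Global explanation: feature impact aggregated across many flows.
    shap.summary_plot(sv, X_test_s[:200])

    # Local explanation: per-feature contributions to a single prediction.
    shap.force_plot(explainer.expected_value, sv[0], X_test_s[0],
                    matplotlib=True)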
Related papers
- Lens: A Foundation Model for Network Traffic [19.3652490585798]
Lens is a foundation model for network traffic that leverages the T5 architecture to learn the pre-trained representations from large-scale unlabeled data.
We design a novel loss that combines three distinct tasks: Masked Span Prediction (MSP), Packet Order Prediction (POP), and Homologous Traffic Prediction (HTP).
arXiv Detail & Related papers (2024-02-06T02:45:13Z)
- X-CBA: Explainability Aided CatBoosted Anomal-E for Intrusion Detection System [2.556190321164248]
Using machine learning (ML) and deep learning (DL) models in Intrusion Detection Systems has led to a trust deficit due to their non-transparent decision-making.
This paper introduces a novel Explainable IDS approach, called X-CBA, that leverages the structural advantages of Graph Neural Networks (GNNs) to effectively process network traffic data.
Our approach achieves high accuracy (99.47%) in threat detection and provides clear, actionable explanations of its analytical outcomes.
arXiv Detail & Related papers (2024-02-01T18:29:16Z)
- An Explainable Ensemble-based Intrusion Detection System for Software-Defined Vehicle Ad-hoc Networks [0.0]
In this study, we explore the detection of cyber threats in vehicle networks through ensemble-based machine learning.
We propose a model that uses Random Forest and CatBoost as the main investigators, with Logistic Regression then reasoning over their outputs to make a final decision.
We observe that our approach improves classification accuracy, and results in fewer misclassifications compared to previous works.
arXiv Detail & Related papers (2023-12-08T10:39:18Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- A Study of Situational Reasoning for Traffic Understanding [63.45021731775964]
We devise three novel text-based tasks for situational reasoning in the traffic domain.
We adopt four knowledge-enhanced methods that have shown generalization capability across language reasoning tasks in prior work.
We provide in-depth analyses of model performance on data partitions and examine model predictions categorically.
arXiv Detail & Related papers (2023-06-05T01:01:12Z)
- TraffNet: Learning Causality of Traffic Generation for What-if Prediction [4.604622556490027]
Real-time what-if traffic prediction is crucial for decision making in intelligent traffic management and control.
Here, we present a simple deep learning framework called TraffNet that learns the mechanisms of traffic generation for what-if prediction.
arXiv Detail & Related papers (2023-03-28T13:12:17Z)
- Utilizing Background Knowledge for Robust Reasoning over Traffic Situations [63.45021731775964]
We focus on a complementary research aspect of Intelligent Transportation: traffic understanding.
We scope our study to text-based methods and datasets given the abundant commonsense knowledge.
We adopt three knowledge-driven approaches for zero-shot QA over traffic situations.
arXiv Detail & Related papers (2022-12-04T09:17:24Z)
- A Driving Behavior Recognition Model with Bi-LSTM and Multi-Scale CNN [59.57221522897815]
We propose a neural network model based on trajectories information for driving behavior recognition.
We evaluate the proposed model on the public BLVD dataset, achieving a satisfying performance.
arXiv Detail & Related papers (2021-03-01T06:47:29Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Transferable Perturbations of Deep Feature Distributions [102.94094966908916]
This work presents a new adversarial attack based on the modeling and exploitation of class-wise and layer-wise deep feature distributions.
We achieve state-of-the-art targeted blackbox transfer-based attack results for undefended ImageNet models.
arXiv Detail & Related papers (2020-04-27T00:32:25Z)
- DeepMAL -- Deep Learning Models for Malware Traffic Detection and Classification [4.187494796512101]
We introduce DeepMAL, a DL model which is able to capture the underlying statistics of malicious traffic.
We show that DeepMAL can detect and classify malware flows with high accuracy, outperforming traditional, shallow-like models.
arXiv Detail & Related papers (2020-03-03T16:54:26Z)