Explaining Network Intrusion Detection System Using Explainable AI Framework
- URL: http://arxiv.org/abs/2103.07110v1
- Date: Fri, 12 Mar 2021 07:15:09 GMT
- Title: Explaining Network Intrusion Detection System Using Explainable AI Framework
- Authors: Shraddha Mane, Dattaraj Rao
- Abstract summary: An intrusion detection system is one of the important layers of cyber safety in today's world.
In this paper, we use a deep neural network for network intrusion detection.
We also propose an explainable AI framework to add transparency at every stage of the machine learning pipeline.
- Score: 0.5076419064097734
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cybersecurity is a domain where the data distribution is constantly
changing as attackers explore newer patterns to attack cyber infrastructure.
An intrusion detection system is one of the important layers of cyber safety
in today's world. Machine-learning-based network intrusion detection systems
have started showing effective results in recent years, and deep learning
models have improved their detection rates further. However, the more accurate
the model, the greater its complexity and hence the lower its interpretability.
Deep neural networks are complex and hard to interpret, which makes them
difficult to use in production because the reasons behind their decisions are
unknown. In this paper, we use a deep neural network for network intrusion
detection and also propose an explainable AI framework that adds transparency
at every stage of the machine learning pipeline. We do this by leveraging
explainable AI algorithms, which aim to make ML models less of a black box by
explaining why a prediction is made. These explanations give us measurable
factors indicating which features influence the prediction of a cyberattack
and to what degree. The explanations are generated with SHAP, LIME, the
Contrastive Explanations Method, ProtoDash, and Boolean Decision Rules via
Column Generation. We apply these approaches to the NSL-KDD dataset for
intrusion detection and demonstrate the results.
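To make the explanation step concrete, here is a minimal sketch of model-agnostic SHAP attribution for an intrusion classifier. The MLP, toy labels, and five feature names are illustrative stand-ins (the paper's model is a deep neural network over the full 41 NSL-KDD features), not the authors' code.

```python
# Minimal sketch, assuming the shap and scikit-learn packages. The MLP below is
# a stand-in for the paper's deep neural network, and the five feature names are
# an illustrative subset of the 41 NSL-KDD features, not the authors' pipeline.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
feature_names = ["duration", "src_bytes", "dst_bytes", "count", "serror_rate"]
X_train = rng.random((500, len(feature_names)))
y_train = (X_train[:, 4] > 0.7).astype(int)  # toy label: high SYN-error rate => attack

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# KernelExplainer is model-agnostic: it only needs a prediction function and a
# small background sample to estimate each feature's Shapley value.
background = shap.sample(X_train, 50, random_state=0)
explainer = shap.KernelExplainer(lambda x: model.predict_proba(x)[:, 1], background)
shap_values = explainer.shap_values(X_train[:1])

# Per-feature contributions to the "attack" probability for this one flow.
for name, value in sorted(zip(feature_names, shap_values[0]),
                          key=lambda t: -abs(t[1])):
    print(f"{name:12s} {value:+.4f}")
```

The same hook, a prediction function plus representative background data, is where LIME, the Contrastive Explanations Method, ProtoDash, or Boolean Decision Rules would plug into the pipeline.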
Related papers
- Characterizing out-of-distribution generalization of neural networks: application to the disordered Su-Schrieffer-Heeger model [38.79241114146971]
We show how interpretability methods can increase trust in the predictions of a neural network trained to classify quantum phases.
In particular, we show that we can ensure better out-of-distribution generalization in this complex classification problem.
This work is an example of how the systematic use of interpretability methods can improve the performance of NNs in scientific problems.
arXiv Detail & Related papers (2024-06-14T13:24:32Z)
- Graph Neural Networks for Vulnerability Detection: A Counterfactual Explanation [41.831831628421675]
Graph Neural Networks (GNNs) have emerged as a prominent code embedding approach for vulnerability detection.
We propose CFExplainer, a novel counterfactual explainer for GNN-based vulnerability detection.
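As a rough, non-graph illustration of the counterfactual question ("what minimal change would flip the detector's verdict?"), here is a toy search on a tabular stand-in classifier. It only conveys the principle; CFExplainer itself operates on GNN inputs, and every name and number below is an assumption.

```python
# Toy counterfactual search on a linear stand-in classifier: nudge the most
# influential feature until the predicted class flips. Purely illustrative of
# the "what minimal change alters the decision?" idea, not CFExplainer's API.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
clf = LogisticRegression().fit(X, y)

def greedy_counterfactual(x, step=0.05, max_iter=400):
    x_cf = x.copy()
    original = clf.predict(x.reshape(1, -1))[0]
    direction = -1.0 if original == 1 else 1.0   # push the logit the other way
    i = int(np.argmax(np.abs(clf.coef_[0])))     # most influential feature
    for _ in range(max_iter):
        if clf.predict(x_cf.reshape(1, -1))[0] != original:
            return x_cf                          # prediction flipped
        x_cf[i] += direction * step * np.sign(clf.coef_[0][i])
    return None                                  # no counterfactual within budget

x = X[0]
print("original prediction:", clf.predict(x.reshape(1, -1))[0])
print("counterfactual features:", greedy_counterfactual(x))
```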
arXiv Detail & Related papers (2024-04-24T06:52:53Z)
- X-CBA: Explainability Aided CatBoosted Anomal-E for Intrusion Detection System [2.556190321164248]
Using machine learning (ML) and deep learning (DL) models in Intrusion Detection Systems has led to a trust deficit due to their non-transparent decision-making.
This paper introduces a novel Explainable IDS approach, called X-CBA, that leverages the structural advantages of Graph Neural Networks (GNNs) to effectively process network traffic data.
Our approach achieves 99.47% accuracy in threat detection and provides clear, actionable explanations of its analytical outcomes.
arXiv Detail & Related papers (2024-02-01T18:29:16Z)
- Detection, Explanation and Filtering of Cyber Attacks Combining Symbolic and Sub-Symbolic Methods [0.0]
We explore combining symbolic and sub-symbolic methods that incorporate domain knowledge in the area of cybersecurity.
The proposed method is shown to produce intuitive explanations for alerts for a diverse range of scenarios.
Not only do the explanations provide deeper insights into the alerts, but they also lead to a reduction of false positive alerts by 66% and by 93% when including the fidelity metric.
arXiv Detail & Related papers (2022-12-23T09:03:51Z)
- Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks [6.44397009982949]
We introduce a novel method for backdoor detection that extracts features from a pre-trained DNN's weights.
In comparison to other detection techniques, this has a number of benefits, such as not requiring any training data.
Our method outperforms the competing algorithms in terms of efficiency and is more accurate, helping to ensure the safe application of deep learning and AI.
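A hedged sketch of the general recipe described above, assuming only NumPy: factorize each layer's weights with SVD and keep the top singular values as a compact, training-data-free feature vector for a downstream backdoor classifier. The layer shapes and function are illustrative, not the paper's exact pipeline.

```python
# Hedged sketch: spectral features from a network's weights via SVD. Not the
# paper's exact factorization; the layer shapes below are made up.
import numpy as np

def weight_spectrum_features(weight_matrices, k=10):
    """Concatenate the top-k singular values of each layer's weights."""
    feats = []
    for w in weight_matrices:
        w2d = w.reshape(w.shape[0], -1)           # flatten conv kernels to 2-D
        s = np.linalg.svd(w2d, compute_uv=False)  # singular values, descending
        s = np.pad(s[:k], (0, max(0, k - len(s))))
        feats.append(s)
    return np.concatenate(feats)

# Toy usage with random weights standing in for a pre-trained DNN.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 3, 3, 3)), rng.standard_normal((128, 64, 3, 3))]
print(weight_spectrum_features(layers, k=5).shape)  # (10,)
```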
arXiv Detail & Related papers (2022-12-15T20:20:18Z)
- Sufficient Reasons for A Zero-Day Intrusion Detection Artificial Immune System [40.31029890303761]
The interpretability and transparency of a machine learning model are the foundation of trust in AI-driven intrusion detection results.
This paper proposes a rigorous, interpretable, artificial-intelligence-driven intrusion detection approach based on an artificial immune system.
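For context, below is a minimal sketch of negative selection, the classic artificial-immune-system scheme: detectors are sampled so that none matches normal ("self") traffic, and any input matched by a detector is flagged. The 2-D data and radii are assumptions for illustration; the paper's construction, including its sufficient-reasons analysis, is more elaborate.

```python
# Minimal negative-selection sketch; 2-D points stand in for traffic features.
import numpy as np

rng = np.random.default_rng(0)
self_samples = rng.random((300, 2)) * 0.5        # "normal" traffic in [0, 0.5]^2

def train_detectors(self_set, n=200, radius=0.1):
    """Keep random detectors that do NOT match any self (normal) sample."""
    detectors = []
    while len(detectors) < n:
        d = rng.random(2)
        if np.min(np.linalg.norm(self_set - d, axis=1)) > radius:
            detectors.append(d)
    return np.array(detectors)

def is_anomalous(x, detectors, radius=0.1):
    """An input matched by any detector is flagged as an intrusion."""
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) <= radius)

detectors = train_detectors(self_samples)
print(is_anomalous(np.array([0.2, 0.2]), detectors))  # normal region -> likely False
print(is_anomalous(np.array([0.9, 0.9]), detectors))  # far from self -> likely True
```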
arXiv Detail & Related papers (2022-04-05T14:46:08Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
But it can be very difficult to debug a deep learning model when it doesn't work.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- Self-Supervised and Interpretable Anomaly Detection using Network Transformers [1.0705399532413615]
This paper introduces the Network Transformer (NeT) model for anomaly detection.
NeT incorporates the graph structure of the communication network in order to improve interpretability.
The presented approach was tested by evaluating how successfully it detects anomalies in an Industrial Control System.
arXiv Detail & Related papers (2022-02-25T22:05:59Z)
- Intrusion detection in computer systems by using artificial neural networks with Deep Learning approaches [0.0]
Intrusion detection in computer networks has become one of the most important issues in cybersecurity.
This paper focuses on the design and implementation of an intrusion detection system based on Deep Learning architectures.
arXiv Detail & Related papers (2020-12-15T19:12:23Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
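A hedged PyTorch sketch of the fingerprinting idea: derive a gradient-based adversarial perturbation and summarize it as a feature for a Trojaned-vs-benign decision. A single FGSM-style step stands in for the paper's learned perturbations, and the model and summary statistic are illustrative.

```python
# Hedged sketch: a gradient-sign perturbation as a crude model fingerprint.
# One FGSM step stands in for the paper's learned perturbations.
import torch
import torch.nn.functional as F

def adversarial_fingerprint(model, x, y, eps=0.03):
    """Perturbation statistics that could feed a Trojaned-vs-benign classifier."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    delta = eps * x.grad.sign()                  # FGSM-style perturbation
    return delta.flatten(1).abs().mean(dim=1)    # one toy feature per input

# Toy usage with a small stand-in classifier.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.randn(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
print(adversarial_fingerprint(model, x, y))
```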
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
- Firearm Detection and Segmentation Using an Ensemble of Semantic Neural Networks [62.997667081978825]
We present a weapon detection system based on an ensemble of semantic Convolutional Neural Networks.
A set of simpler neural networks dedicated to specific tasks requires fewer computational resources and can be trained in parallel.
The overall output of the system, given by the aggregation of the individual networks' outputs, can be tuned by the user to trade off false positives against false negatives, as sketched below.
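A minimal sketch of that aggregation step, assuming simple averaging of per-network confidences against a user-set threshold (the detector roles and scores are made up):

```python
# Hedged sketch of threshold-tunable ensemble aggregation.
import numpy as np

def ensemble_decision(scores, threshold):
    """Average the per-network confidences and compare against a tunable threshold."""
    return float(np.mean(scores)) >= threshold

# Per-network "weapon present" confidences for one frame (illustrative roles).
scores = [0.92, 0.40, 0.75]
print(ensemble_decision(scores, threshold=0.5))   # permissive: fewer false negatives
print(ensemble_decision(scores, threshold=0.8))   # strict: fewer false positives
```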
arXiv Detail & Related papers (2020-02-11T13:58:16Z)