Sufficient Reasons for A Zero-Day Intrusion Detection Artificial Immune
System
- URL: http://arxiv.org/abs/2204.02255v1
- Date: Tue, 5 Apr 2022 14:46:08 GMT
- Title: Sufficient Reasons for A Zero-Day Intrusion Detection Artificial Immune
System
- Authors: Qianru Zhou, Rongzhen Li, Lei Xu, Arumugam Nallanathan, Jian Yang,
Anmin Fu
- Abstract summary: Interpretability and transparency of the machine learning model are the foundation of trust in AI-driven intrusion detection results.
This paper proposes a rigorously interpretable Artificial Intelligence driven intrusion detection approach based on an artificial immune system.
- Score: 40.31029890303761
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Internet is the most complex machine humankind has ever built,
and defending it against intrusions is even more complex. With the ever
increasing number of new intrusions, intrusion detection tasks rely more and
more on Artificial Intelligence. Interpretability and transparency of the
machine learning model are the foundation of trust in AI-driven intrusion
detection results. Current interpretation techniques for Artificial
Intelligence in intrusion detection are heuristic, and thus neither accurate
nor sufficient. This paper proposes a rigorously interpretable Artificial
Intelligence driven intrusion detection approach based on an artificial immune
system. Details of the rigorous interpretation calculation process for a
decision tree model are presented. Prime implicant explanations for benign
traffic flows are given in detail as the rules for negative selection in the
cyber immune system. Experiments are carried out on real-life traffic.
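A minimal sketch of the idea described in the abstract, assuming a scikit-learn decision tree: it derives a subset-minimal sufficient reason (prime implicant) for a flow's benign verdict by greedily dropping root-to-leaf path conditions while every leaf still consistent with the remaining conditions predicts benign, and then treats the collected benign rules as the "self" set for negative selection. The function names, the benign class index, and the feature layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def path_conditions(clf, x):
    """Collect the (feature, '<=' or '>', threshold) tests on x's root-to-leaf path."""
    t = clf.tree_
    node, conds = 0, []
    while t.children_left[node] != -1:                 # -1 marks a leaf
        f, thr = t.feature[node], t.threshold[node]
        if x[f] <= thr:
            conds.append((f, "<=", thr))
            node = t.children_left[node]
        else:
            conds.append((f, ">", thr))
            node = t.children_right[node]
    return conds


def all_consistent_leaves_benign(clf, conds, benign_class):
    """True iff every leaf consistent with the partial condition set predicts benign."""
    t = clf.tree_
    bounds = {}                                        # feature -> interval (lo, hi]
    for f, op, thr in conds:
        lo, hi = bounds.get(f, (-np.inf, np.inf))
        bounds[f] = (lo, min(hi, thr)) if op == "<=" else (max(lo, thr), hi)

    def leaves(node, b):
        if t.children_left[node] == -1:
            return [node]
        f, thr = t.feature[node], t.threshold[node]
        lo, hi = b.get(f, (-np.inf, np.inf))
        found = []
        if lo < thr:                                   # left branch (x[f] <= thr) still feasible
            found += leaves(t.children_left[node], {**b, f: (lo, min(hi, thr))})
        if hi > thr:                                   # right branch (x[f] > thr) still feasible
            found += leaves(t.children_right[node], {**b, f: (max(lo, thr), hi)})
        return found

    return all(np.argmax(t.value[leaf]) == benign_class for leaf in leaves(0, bounds))


def prime_implicant(clf, x, benign_class):
    """Greedily drop path conditions as long as every consistent leaf stays benign."""
    conds = path_conditions(clf, x)
    for c in list(conds):
        trial = [d for d in conds if d is not c]
        if all_consistent_leaves_benign(clf, trial, benign_class):
            conds = trial
    return conds                                       # a subset-minimal sufficient reason


def is_self(x, benign_rules):
    """Negative selection: a flow counts as 'self' only if it matches some benign rule."""
    def matches(rule):
        return all((x[f] <= thr) if op == "<=" else (x[f] > thr) for f, op, thr in rule)
    return any(matches(rule) for rule in benign_rules)


# Hypothetical usage on flow-level features (duration, bytes, packets, ...):
# clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
# benign_rules = [prime_implicant(clf, x, benign_class=0) for x in benign_prototypes]
# alert = not is_self(new_flow, benign_rules)          # non-self flows raise an alert
```

Greedy dropping yields a subset-minimal (though not necessarily cardinality-minimal) sufficient reason; the paper's exact calculation for the decision tree model may differ.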
Related papers
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - Artificial Intelligence Enables Real-Time and Intuitive Control of
Prostheses via Nerve Interface [25.870454492249863]
The next-generation prosthetic hand that moves and feels like a real hand requires a robust neural interconnection between the human mind and machines.
Here we present a neuroprosthetic system to demonstrate that principle by employing an artificial intelligence (AI) agent to translate the amputee's movement intent through a peripheral nerve interface.
arXiv Detail & Related papers (2022-03-16T14:33:38Z) - A Hybrid Approach for an Interpretable and Explainable Intrusion
Detection System [0.5872014229110213]
This work proposes an interpretable and explainable hybrid intrusion detection system.
The system combines experts' written rules and dynamic knowledge continuously generated by a decision tree algorithm.
arXiv Detail & Related papers (2021-11-19T15:39:28Z) - Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Explaining Network Intrusion Detection System Using Explainable AI
Framework [0.5076419064097734]
An intrusion detection system is one of the important layers of cyber safety in today's world.
In this paper, we have used a deep neural network for network intrusion detection.
We also propose an explainable AI framework to add transparency at every stage of the machine learning pipeline.
arXiv Detail & Related papers (2021-03-12T07:15:09Z) - Self-explaining AI as an alternative to interpretable AI [0.0]
Double descent indicates that deep neural networks operate by smoothly interpolating between data points.
Neural networks trained on complex real world data are inherently hard to interpret and prone to failure if asked to extrapolate.
Self-explaining AIs are capable of providing a human-understandable explanation along with confidence levels for both the decision and explanation.
arXiv Detail & Related papers (2020-02-12T18:50:11Z) - Firearm Detection and Segmentation Using an Ensemble of Semantic Neural
Networks [62.997667081978825]
We present a weapon detection system based on an ensemble of semantic Convolutional Neural Networks.
A set of simpler neural networks dedicated to specific tasks requires fewer computational resources and can be trained in parallel.
The overall output of the system, given by aggregating the outputs of the individual networks, can be tuned by a user to trade off false positives and false negatives.
arXiv Detail & Related papers (2020-02-11T13:58:16Z) - Deceptive AI Explanations: Creation and Detection [3.197020142231916]
We investigate how AI models can be used to create and detect deceptive explanations.
As an empirical evaluation, we focus on text classification and alter the explanations generated by GradCAM.
We evaluate the effect of deceptive explanations on users in an experiment with 200 participants.
arXiv Detail & Related papers (2020-01-21T16:41:22Z) - Adversarial vs behavioural-based defensive AI with joint, continual and
active learning: automated evaluation of robustness to deception, poisoning
and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.