Detection, Explanation and Filtering of Cyber Attacks Combining Symbolic and Sub-Symbolic Methods
- URL: http://arxiv.org/abs/2212.13991v1
- Date: Fri, 23 Dec 2022 09:03:51 GMT
- Title: Detection, Explanation and Filtering of Cyber Attacks Combining Symbolic and Sub-Symbolic Methods
- Authors: Anna Himmelhuber, Dominik Dold, Stephan Grimm, Sonja Zillner, Thomas Runkler
- Abstract summary: We explore combining symbolic and sub-symbolic methods that incorporate domain knowledge in the area of cybersecurity.
The proposed method is shown to produce intuitive explanations for alerts across a diverse range of scenarios.
Not only do the explanations provide deeper insights into the alerts, but they also reduce false positive alerts by 66%, and by 93% when the fidelity metric is included.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) on graph-structured data has recently received increased
interest in the context of intrusion detection in the cybersecurity domain. These ML
methods are gaining traction due to the growing amounts of data generated by monitoring
tools as well as increasingly sophisticated attacks. Knowledge graphs and their
corresponding learning techniques, such as Graph Neural Networks (GNNs), with their
ability to seamlessly integrate data from multiple domains using human-understandable
vocabularies, are finding application in the cybersecurity domain. However, like other
connectionist models, GNNs lack transparency in their decision making. This is
especially important because the cybersecurity domain tends to produce a high number of
false positive alerts, so triage must be done by domain experts, requiring substantial
manual effort. We therefore address Explainable AI (XAI) for GNNs to enhance trust
management, exploring a combination of symbolic and sub-symbolic methods that
incorporates domain knowledge. We experimented with this approach by generating
explanations in an industrial demonstrator system. The proposed method is shown to
produce intuitive explanations for alerts across a diverse range of scenarios. Not only
do the explanations provide deeper insights into the alerts, but they also reduce
false positive alerts by 66%, and by 93% when the fidelity metric is included.
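To make the filtering step concrete, here is a minimal sketch of how explanation fidelity can gate alerts. The `model`, `graph`, and `explain` objects and the 0.5 threshold are hypothetical stand-ins for illustration, not the paper's implementation:

```python
# Hypothetical sketch: filtering GNN intrusion alerts by explanation fidelity.
# `model` is a trained GNN scorer and `explain` a subgraph explainer
# (GNNExplainer-style); both are assumed interfaces, not the paper's code.

def fidelity(model, graph, alert_node, explanation_subgraph):
    """Fidelity: how much the alert score drops when the explanation is removed.
    A faithful explanation carries most of the evidence behind the alert."""
    full_score = model.score(graph, alert_node)
    masked = graph.without(explanation_subgraph)   # mask out the explained part
    return full_score - model.score(masked, alert_node)

def triage(model, explain, graph, alerts, threshold=0.5):
    kept = []
    for node in alerts:
        subgraph = explain(model, graph, node)     # symbolic explanation of the alert
        if fidelity(model, graph, node, subgraph) >= threshold:
            kept.append((node, subgraph))          # keep alert with its explanation
    return kept                                    # low-fidelity alerts filtered out
```

Under these assumptions, alerts whose explanations fail the fidelity check are treated as likely false positives, which is one way an explanation-based filter can trim the alert stream.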
Related papers
- Advancing Security in AI Systems: A Novel Approach to Detecting Backdoors in Deep Neural Networks [3.489779105594534]
Backdoors in deep neural networks (DNNs), including models deployed through cloud data-processing services, can be exploited by malicious actors.
Our approach leverages advanced tensor decomposition algorithms to analyze the weights of pre-trained DNNs and distinguish between backdoored and clean models.
This advancement enhances the security of deep learning and AI in networked systems, providing essential cybersecurity against evolving threats in emerging technologies.
arXiv Detail & Related papers (2024-03-13T03:10:11Z)
- X-CBA: Explainability Aided CatBoosted Anomal-E for Intrusion Detection System [2.556190321164248]
Using machine learning (ML) and deep learning (DL) models in Intrusion Detection Systems has led to a trust deficit due to their non-transparent decision-making.
This paper introduces a novel Explainable IDS approach, called X-CBA, that leverages the structural advantages of Graph Neural Networks (GNNs) to effectively process network traffic data.
Our approach achieves 99.47% accuracy in threat detection and provides clear, actionable explanations of its analytical outcomes.
arXiv Detail & Related papers (2024-02-01T18:29:16Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a novel approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
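As an illustration of the query-based idea (not the paper's actual algorithm), a verifier can record the model's answers on a fixed set of probe nodes and later check that a deployed model still reproduces them; `model.predict` and `probe_nodes` are assumed names:

```python
import hashlib
import json

def fingerprint(model, graph, probe_nodes):
    # Hypothetical `model.predict(graph, node)` returning a class label.
    preds = [int(model.predict(graph, n)) for n in probe_nodes]
    return hashlib.sha256(json.dumps(preds).encode()).hexdigest()

# At deployment the owner stores:   reference = fingerprint(model, graph, probes)
# At audit time integrity holds iff fingerprint(served, graph, probes) == reference
```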
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks [6.44397009982949]
We introduce a novel method for backdoor detection that extracts features from the weights of pre-trained DNNs.
Compared with other detection techniques, this has several benefits, such as not requiring any training data.
Our method outperforms competing algorithms in both efficiency and accuracy, helping to ensure the safe application of deep learning and AI.
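A hedged sketch of the weight-factorization idea, using a plain SVD as the matrix factorization and normalized singular values as features; the actual papers use more elaborate decompositions and a trained detector:

```python
import numpy as np

def weight_features(weight_matrices, k=5):
    """Spectral features from each layer's weight matrix (illustrative only)."""
    feats = []
    for W in weight_matrices:                    # one 2-D weight matrix per layer
        s = np.linalg.svd(W, compute_uv=False)   # singular-value spectrum
        feats.extend(s[:k] / (s.sum() + 1e-12))  # normalized leading components
    return np.array(feats)

# A detector is then any classifier fit on features of known clean vs.
# backdoored models, e.g. clf.fit([weight_features(m) for m in models], labels).
```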
arXiv Detail & Related papers (2022-12-15T20:20:18Z)
- Software Vulnerability Detection via Deep Learning over Disaggregated Code Graph Representation [57.92972327649165]
This work explores a deep learning approach to automatically learn the insecure patterns from code corpora.
Because parsed code naturally admits a graph structure, we develop a novel graph neural network (GNN) to exploit both the semantic context and structural regularity of a program.
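For readers unfamiliar with GNNs over code graphs, the following is a generic GCN-style message-passing layer in NumPy, not the paper's architecture:

```python
import numpy as np

def gnn_layer(H, A, W):
    """H: node features (n x d), A: adjacency (n x n), W: weights (d x d')."""
    A_hat = A + np.eye(A.shape[0])   # add self-loops so a node keeps its own state
    deg = A_hat.sum(axis=1, keepdims=True)
    msgs = (A_hat / deg) @ H         # mean-aggregate neighbor features
    return np.maximum(msgs @ W, 0.0) # linear transform followed by ReLU
```

Stacking a few such layers lets each statement's representation absorb the semantic and structural context of nearby code.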
arXiv Detail & Related papers (2021-09-07T21:24:36Z)
- Machine learning on knowledge graphs for context-aware security monitoring [0.0]
We discuss the application of machine learning on knowledge graphs for intrusion detection.
We experimentally evaluate a link-prediction method for scoring anomalous activity in industrial systems.
The proposed method is shown to produce intuitively well-calibrated and interpretable alerts in a diverse range of scenarios.
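To illustrate link-prediction scoring (with a TransE-style distance as a stand-in for the paper's model), a triple (h, r, t) is plausible when the embeddings satisfy e_h + e_r ≈ e_t, and an observed event that scores poorly becomes an alert:

```python
import numpy as np

def triple_score(ent, rel, h, r, t):
    # Higher (less negative) means the knowledge graph finds the link plausible.
    return -np.linalg.norm(ent[h] + rel[r] - ent[t])

def is_anomalous(event, ent, rel, threshold):
    h, r, t = event                  # e.g. (host, connectsTo, service)
    return triple_score(ent, rel, h, r, t) < threshold  # unusual link -> alert
```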
arXiv Detail & Related papers (2021-05-18T18:00:19Z)
- Explaining Network Intrusion Detection System Using Explainable AI Framework [0.5076419064097734]
Intrusion detection systems are one of the important layers of cyber safety in today's world.
In this paper, we use a deep neural network for network intrusion detection.
We also propose an explainable AI framework to add transparency at every stage of the machine learning pipeline.
arXiv Detail & Related papers (2021-03-12T07:15:09Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
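A minimal PyTorch sketch of the adversarial filtering game, assuming 64-dimensional node embeddings and a binary sensitive attribute (sizes and optimizers are placeholders, and the total variation / Wasserstein terms are omitted):

```python
import torch
import torch.nn as nn

filt = nn.Linear(64, 64)   # obfuscating filter applied to node embeddings
adv = nn.Linear(64, 2)     # adversary guessing the sensitive attribute
opt_f = torch.optim.Adam(filt.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adv.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(z, sensitive):
    # 1) adversary learns to predict the sensitive attribute from filtered embeddings
    loss_a = ce(adv(filt(z).detach()), sensitive)
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    # 2) filter is updated to *maximize* the adversary's loss (min-max game)
    loss_f = -ce(adv(filt(z)), sensitive)
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
```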
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
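Conceptually (this is not GTA's code), planting a subgraph trigger means overwriting the induced subgraph on a few anchor nodes with a fixed topological pattern and fixed features:

```python
import numpy as np

def inject_trigger(A, X, anchor_nodes, trigger_adj, trigger_feat):
    """A: adjacency (n x n), X: node features (n x d); all arguments illustrative."""
    idx = np.asarray(anchor_nodes)
    A, X = A.copy(), X.copy()
    A[np.ix_(idx, idx)] = trigger_adj   # plant the topological structure
    X[idx] = trigger_feat               # plant the descriptive features
    return A, X
```

A model poisoned during training can then key on this pattern at inference time.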
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Stealing Links from Graph Neural Networks [72.85344230133248]
Recently, neural networks have been extended to graph data; such models are known as graph neural networks (GNNs).
Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection.
We propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.
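A toy version of the link-stealing intuition (the threshold and similarity measure are illustrative, not the attack's actual design): nodes whose predicted class posteriors are very similar are guessed to be connected:

```python
import numpy as np

def steal_links(posteriors, threshold=0.95):
    """posteriors: (n x c) class probabilities returned by the target GNN."""
    P = posteriors / np.linalg.norm(posteriors, axis=1, keepdims=True)
    sim = P @ P.T                       # pairwise cosine similarity
    n = sim.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if sim[i, j] > threshold]   # inferred edges
```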
arXiv Detail & Related papers (2020-05-05T13:22:35Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations of the input, known as adversarial attacks.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.