Detection, Explanation and Filtering of Cyber Attacks Combining Symbolic
and Sub-Symbolic Methods
- URL: http://arxiv.org/abs/2212.13991v1
- Date: Fri, 23 Dec 2022 09:03:51 GMT
- Title: Detection, Explanation and Filtering of Cyber Attacks Combining Symbolic
and Sub-Symbolic Methods
- Authors: Anna Himmelhuber, Dominik Dold, Stephan Grimm, Sonja Zillner, Thomas
Runkler
- Abstract summary: We explore combinations of symbolic and sub-symbolic methods that incorporate domain knowledge in the area of cybersecurity.
The proposed method is shown to produce intuitive explanations for alerts across a diverse range of scenarios.
Not only do the explanations provide deeper insights into the alerts, but they also reduce false positive alerts by 66%, and by 93% when the fidelity metric is included.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) on graph-structured data has recently attracted increased interest in the context of intrusion detection in the cybersecurity domain. Due to the growing amounts of data generated by monitoring tools and increasingly sophisticated attacks, these ML methods are gaining traction. Knowledge graphs and their corresponding learning techniques, such as Graph Neural Networks (GNNs), with their ability to seamlessly integrate data from multiple domains using human-understandable vocabularies, are finding application in the cybersecurity domain. However, like other connectionist models, GNNs lack transparency in their decision making. This is especially important in the cybersecurity domain, where the typically high number of false positive alerts must be triaged by domain experts, requiring considerable manpower. Therefore, we address Explainable AI (XAI) for GNNs to enhance trust management by combining symbolic and sub-symbolic methods that incorporate domain knowledge. We experimented with this approach by generating explanations in an industrial demonstrator system. The proposed method is shown to produce intuitive explanations for alerts across a diverse range of scenarios. Not only do the explanations provide deeper insights into the alerts, but they also reduce false positive alerts by 66%, and by 93% when the fidelity metric is included.
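To make the filtering idea concrete, here is a minimal sketch of fidelity-based alert filtering: remove the explanation subgraph from the input and measure how far the GNN's attack probability drops; alerts whose explanations carry little of the prediction are treated as likely false positives. This is not the authors' implementation; the model interface (a PyTorch Geometric style graph classifier), the explainer returning a soft edge mask, and the 0.5 thresholds are all illustrative assumptions.

```python
# Illustrative sketch of fidelity-based alert filtering (assumptions noted
# above; not the paper's code). `model` maps (x, edge_index) to graph-level
# logits of shape [1, 2], with class 1 = "attack".
import torch

def fidelity_plus(model, x, edge_index, edge_mask):
    """Fidelity+: drop in the predicted attack probability when the
    explanation edges are removed. A high value means the explanation
    actually carries the evidence behind the alert."""
    with torch.no_grad():
        p_full = model(x, edge_index).softmax(-1)[0, 1]
        keep = edge_mask < 0.5                        # drop explanation edges
        p_masked = model(x, edge_index[:, keep]).softmax(-1)[0, 1]
    return (p_full - p_masked).item()

def filter_alerts(model, alerts, explainer, threshold=0.5):
    """Keep only alerts whose explanations are faithful to the prediction;
    low-fidelity alerts are treated as likely false positives."""
    kept = []
    for x, edge_index in alerts:
        edge_mask = explainer(x, edge_index)          # soft edge mask in [0, 1]
        if fidelity_plus(model, x, edge_index, edge_mask) >= threshold:
            kept.append((x, edge_index))
    return kept
```

In practice the threshold would be tuned on triaged alerts; the 66%/93% reductions reported above refer to the authors' pipeline, not to this sketch.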
Related papers
- CONTINUUM: Detecting APT Attacks through Spatial-Temporal Graph Neural Networks [0.9553673944187253]
Advanced Persistent Threats (APTs) represent a significant challenge in cybersecurity.
Traditional Intrusion Detection Systems (IDS) often fall short in detecting these multi-stage attacks.
arXiv Detail & Related papers (2025-01-06T12:43:59Z)
- Identifying Backdoored Graphs in Graph Neural Network Training: An Explanation-Based Approach with Novel Metrics [13.93535590008316]
Graph Neural Networks (GNNs) have gained popularity in numerous domains, yet they are vulnerable to backdoor attacks.
We devised a novel detection method that creatively leverages graph-level explanations.
Our results show that our method can achieve high detection performance, marking a significant advancement in safeguarding GNNs against backdoor attacks.
arXiv Detail & Related papers (2024-03-26T22:41:41Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Interpreting GNN-based IDS Detections Using Provenance Graph Structural Features [15.256262257064982]
We introduce PROVEXPLAINER, a framework offering instance-level security-aware explanations using an interpretable surrogate model.
On malware and APT datasets, PROVEXPLAINER achieves up to 29%/27%/25% higher fidelity+, precision, and recall, respectively, and 12% lower fidelity-.
arXiv Detail & Related papers (2023-06-01T17:36:24Z)
- Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks [6.44397009982949]
We introduce a novel method for backdoor detection that extracts features from the weights of pre-trained DNNs.
In comparison to other detection techniques, this has a number of benefits, such as not requiring any training data.
Our method outperforms the competing algorithms in terms of efficiency and is more accurate, helping to ensure the safe application of deep learning and AI.
arXiv Detail & Related papers (2022-12-15T20:20:18Z)
- Software Vulnerability Detection via Deep Learning over Disaggregated Code Graph Representation [57.92972327649165]
This work explores a deep learning approach to automatically learn the insecure patterns from code corpora.
Because code naturally admits graph structures through parsing, we develop a novel graph neural network (GNN) to exploit both the semantic context and structural regularity of a program.
arXiv Detail & Related papers (2021-09-07T21:24:36Z)
- Explaining Network Intrusion Detection System Using Explainable AI Framework [0.5076419064097734]
Intrusion detection systems are one of the important layers of cyber safety in today's world.
In this paper, we use a deep neural network for network intrusion detection.
We also propose an explainable AI framework to add transparency at every stage of the machine learning pipeline.
arXiv Detail & Related papers (2021-03-12T07:15:09Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Stealing Links from Graph Neural Networks [72.85344230133248]
Recently, neural networks have been extended to graph data; these models are known as graph neural networks (GNNs).
Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection.
We propose the first attacks to steal a graph from the outputs of a GNN model trained on that graph; a minimal sketch of the core intuition appears after this list.
arXiv Detail & Related papers (2020-05-05T13:22:35Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep models can be easily fooled by small perturbations on the input, known as adversarial attacks.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
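As referenced in the "Stealing Links from Graph Neural Networks" entry above, the core intuition of that attack can be sketched as follows, assuming black-box access to the target model's node posteriors. The Euclidean distance, the fixed threshold, and the brute-force pairwise loop are illustrative simplifications, not the paper's exact method.

```python
# Illustrative sketch of posterior-similarity link stealing: nodes whose
# output posteriors are unusually close are predicted to share an edge.
import numpy as np

def steal_links(posteriors: np.ndarray, threshold: float = 0.1):
    """posteriors: [n_nodes, n_classes] array obtained by querying the
    target GNN for each node. Returns predicted edges (i, j)."""
    n = posteriors.shape[0]
    predicted_edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(posteriors[i] - posteriors[j]) < threshold:
                predicted_edges.append((i, j))
    return predicted_edges
```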