Interpreting GNN-based IDS Detections Using Provenance Graph Structural
Features
- URL: http://arxiv.org/abs/2306.00934v2
- Date: Tue, 6 Jun 2023 22:42:53 GMT
- Title: Interpreting GNN-based IDS Detections Using Provenance Graph Structural
Features
- Authors: Kunal Mukherjee, Joshua Wiedemeier, Tianhao Wang, Muhyun Kim, Feng
Chen, Murat Kantarcioglu and Kangkook Jee
- Abstract summary: We propose PROVEXPLAINER, a framework for projecting abstract GNN decision boundaries onto interpretable feature spaces.
We first replicate the decision-making process of GNN-based security models using simpler and explainable models such as Decision Trees (DTs).
Our graph structural features are closely tied to problem-space actions in the system provenance domain, which allows the detection results to be explained in descriptive, human language.
- Score: 15.138765307403874
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The black-box nature of complex Neural Network (NN)-based models has hindered
their widespread adoption in security domains due to the lack of logical
explanations and actionable follow-ups for their predictions. To enhance the
transparency and accountability of Graph Neural Network (GNN) security models
used in system provenance analysis, we propose PROVEXPLAINER, a framework for
projecting abstract GNN decision boundaries onto interpretable feature spaces.
We first replicate the decision-making process of GNN-based security models
using simpler and explainable models such as Decision Trees (DTs). To maximize
the accuracy and fidelity of the surrogate models, we propose novel graph
structural features founded on classical graph theory and enhanced by extensive
data study with security domain knowledge. Our graph structural features are
closely tied to problem-space actions in the system provenance domain, which
allows the detection results to be explained in descriptive, human language.
PROVEXPLAINER allowed simple DT models to achieve 95% fidelity to the GNN on
program classification tasks with general graph structural features, and 99%
fidelity on malware detection tasks with a task-specific feature package
tailored for direct interpretation. The explanations for malware classification
are demonstrated with case studies of five real-world malware samples across
three malware families.
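
The core mechanism the abstract describes is surrogate distillation: a simple, interpretable model (a DT) is fit to the *predictions* of the black-box GNN over graph structural features, and its quality is measured by fidelity, the fraction of inputs on which surrogate and GNN agree. The sketch below illustrates that loop in miniature; the toy graphs, the stand-in "GNN" decision rule, and the single-feature decision stump are all illustrative inventions, not PROVEXPLAINER's actual features or models.

```python
# Minimal sketch of surrogate distillation and fidelity measurement,
# the paradigm described in the abstract. All graphs, the stand-in
# black-box rule, and the stump surrogate are hypothetical.

def structural_features(edges):
    """Classical graph-structural features: node count, edge count, max out-degree."""
    nodes = {u for u, v in edges} | {v for u, v in edges}
    out_deg = {}
    for u, _ in edges:
        out_deg[u] = out_deg.get(u, 0) + 1
    return (len(nodes), len(edges), max(out_deg.values(), default=0))

def blackbox_gnn(edges):
    """Stand-in for the GNN: flags graphs whose busiest node fans out widely."""
    _, _, max_deg = structural_features(edges)
    return 1 if max_deg >= 3 else 0

def fit_stump(samples, labels):
    """One-level decision tree over the max-out-degree feature: choose the
    threshold that best replicates the black-box labels (maximizes fidelity)."""
    degrees = [structural_features(e)[2] for e in samples]
    best_t, best_acc = 0, -1.0
    for t in range(max(degrees) + 2):
        acc = sum((d >= t) == bool(y) for d, y in zip(degrees, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def fidelity(samples, threshold):
    """Agreement rate between the surrogate stump and the black-box model."""
    agree = sum(
        (structural_features(e)[2] >= threshold) == bool(blackbox_gnn(e))
        for e in samples
    )
    return agree / len(samples)

# Toy provenance-style graphs as (subject -> object) edge lists.
graphs = [
    [("bash", "cat")],                                 # short benign chain
    [("p", "a"), ("p", "b"), ("p", "c"), ("p", "d")],  # wide fan-out
    [("w", "x"), ("x", "y"), ("y", "z")],              # linear chain
    [("m", "f1"), ("m", "f2"), ("m", "f3")],           # mass file writes
]
labels = [blackbox_gnn(g) for g in graphs]
t = fit_stump(graphs, labels)
print(t, fidelity(graphs, t))  # the stump replicates the black box exactly here
```

Because the surrogate's split is a named structural feature ("max out-degree ≥ t"), its decisions map directly to problem-space actions (e.g., a process touching many files), which is the interpretability payoff the abstract claims for DT surrogates.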
Related papers
- Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks [50.87615167799367]
We certify Graph Neural Networks (GNNs) against poisoning and backdoor attacks targeting the node features of a given graph.
Our framework provides fundamental insights into the role of graph structure and its connectivity on the worst-case behavior of convolution-based and PageRank-based GNNs.
arXiv Detail & Related papers (2024-07-15T16:12:51Z)
- Kolmogorov-Arnold Graph Neural Networks [2.4005219869876453]
Graph neural networks (GNNs) excel in learning from network-like data but often lack interpretability.
We propose the Graph Kolmogorov-Arnold Network (GKAN) to enhance both accuracy and interpretability.
arXiv Detail & Related papers (2024-06-26T13:54:59Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of the GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Anomal-E: A Self-Supervised Network Intrusion Detection System based on Graph Neural Networks [0.0]
This paper investigates Graph Neural Networks (GNNs) application for self-supervised network intrusion and anomaly detection.
GNNs are a deep learning approach for graph-based data that incorporate graph structures into learning.
We present Anomal-E, a GNN approach to intrusion and anomaly detection that leverages edge features and graph topological structure in a self-supervised process.
arXiv Detail & Related papers (2022-07-14T10:59:39Z)
- EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks [51.42338058718487]
Graph Neural Networks (GNNs) have received extensive research attention for their promising performance in graph machine learning.
Existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs.
We propose EvenNet, a spectral GNN corresponding to an even-polynomial graph filter.
arXiv Detail & Related papers (2022-05-27T10:48:14Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.