Interpreting GNN-based IDS Detections Using Provenance Graph Structural
Features
- URL: http://arxiv.org/abs/2306.00934v2
- Date: Tue, 6 Jun 2023 22:42:53 GMT
- Title: Interpreting GNN-based IDS Detections Using Provenance Graph Structural
Features
- Authors: Kunal Mukherjee, Joshua Wiedemeier, Tianhao Wang, Muhyun Kim, Feng
Chen, Murat Kantarcioglu and Kangkook Jee
- Abstract summary: We propose PROVEXPLAINER, a framework for projecting abstract GNN decision boundaries onto interpretable feature spaces.
We first replicate the decision-making process of GNN-based security models using simpler and explainable models such as Decision Trees (DTs).
Our graph structural features are closely tied to problem-space actions in the system provenance domain, which allows the detection results to be explained in descriptive, human language.
- Score: 15.138765307403874
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The black-box nature of complex Neural Network (NN)-based models has hindered
their widespread adoption in security domains due to the lack of logical
explanations and actionable follow-ups for their predictions. To enhance the
transparency and accountability of Graph Neural Network (GNN) security models
used in system provenance analysis, we propose PROVEXPLAINER, a framework for
projecting abstract GNN decision boundaries onto interpretable feature spaces.
We first replicate the decision-making process of GNN-based security models
using simpler and explainable models such as Decision Trees (DTs). To maximize
the accuracy and fidelity of the surrogate models, we propose novel graph
structural features founded on classical graph theory and enhanced by extensive
data study with security domain knowledge. Our graph structural features are
closely tied to problem-space actions in the system provenance domain, which
allows the detection results to be explained in descriptive, human language.
PROVEXPLAINER allowed simple DT models to achieve 95% fidelity to the GNN on
program classification tasks with general graph structural features, and 99%
fidelity on malware detection tasks with a task-specific feature package
tailored for direct interpretation. The explanations for malware classification
are demonstrated with case studies of five real-world malware samples across
three malware families.
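To make the surrogate-model idea concrete, here is a minimal sketch (in Python, using networkx and scikit-learn) of fitting a decision tree on simple graph-structural features and measuring its fidelity to the labels emitted by a black-box GNN detector. The feature set and function names are illustrative assumptions, not PROVEXPLAINER's actual feature package.
```python
# Hypothetical sketch: a decision-tree surrogate for a black-box GNN detector.
# Feature choices below are illustrative, not the paper's exact feature package.
import networkx as nx
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def structural_features(g: nx.DiGraph) -> list:
    """A few classical graph-theoretic features of a provenance graph."""
    n, m = g.number_of_nodes(), g.number_of_edges()
    degrees = [d for _, d in g.degree()]
    return [
        n,                                         # node count
        m,                                         # edge count
        m / n if n else 0.0,                       # edges per node
        max(degrees) if degrees else 0.0,          # largest fan-in/fan-out hub
        nx.number_weakly_connected_components(g),  # fragmentation of the graph
    ]

def fit_surrogate(graphs, gnn_preds, max_depth=5):
    """Fit a shallow decision tree to mimic the GNN's predictions and report
    fidelity = fraction of graphs on which the two models agree."""
    X = np.array([structural_features(g) for g in graphs])
    dt = DecisionTreeClassifier(max_depth=max_depth).fit(X, gnn_preds)
    fidelity = float(np.mean(dt.predict(X) == gnn_preds))
    return dt, fidelity
```
Note that the tree is trained on the GNN's own predictions rather than ground-truth labels, so a high fidelity score means the tree's human-readable split rules can stand in for the GNN's decision boundary; the abstract reports 95% and 99% fidelity with the paper's richer feature packages.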
Related papers
- How Explanations Leak the Decision Logic: Stealing Graph Neural Networks via Explanation Alignment [9.329315232799814]
Graph Neural Networks (GNNs) have become essential tools for analyzing graph-structured data in domains such as drug discovery and financial analysis.
Recent advances in explainable GNNs have addressed this need by revealing important subgraphs that influence predictions.
This paper investigates how such explanations potentially leak critical decision logic that can be exploited for model stealing.
arXiv Detail & Related papers (2025-06-03T17:11:05Z)
- SE-SGformer: A Self-Explainable Signed Graph Transformer for Link Sign Prediction [8.820909397907274]
Signed Graph Neural Networks (SGNNs) have been shown to be effective in analyzing complex patterns in real-world situations where positive and negative links coexist.
SGNN models suffer from poor explainability, which limits their adoption in critical scenarios that require understanding the rationale behind predictions.
We introduce a Self-Explainable Signed Graph transformer (SE-SGformer) framework, which outputs explainable information while ensuring high prediction accuracy.
arXiv Detail & Related papers (2024-08-16T13:54:50Z)
- Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks [50.87615167799367]
We certify Graph Neural Networks (GNNs) against poisoning attacks, including backdoors, targeting the node features of a given graph.
Our framework provides fundamental insights into the role of graph structure and its connectivity on the worst-case behavior of convolution-based and PageRank-based GNNs.
arXiv Detail & Related papers (2024-07-15T16:12:51Z)
- Graph Neural Networks for Vulnerability Detection: A Counterfactual Explanation [41.831831628421675]
Graph Neural Networks (GNNs) have emerged as a prominent code embedding approach for vulnerability detection.
We propose CFExplainer, a novel counterfactual explainer for GNN-based vulnerability detection.
arXiv Detail & Related papers (2024-04-24T06:52:53Z)
- X-CBA: Explainability Aided CatBoosted Anomal-E for Intrusion Detection System [2.556190321164248]
Using machine learning (ML) and deep learning (DL) models in Intrusion Detection Systems has led to a trust deficit due to their non-transparent decision-making.
This paper introduces a novel Explainable IDS approach, called X-CBA, that leverages the structural advantages of Graph Neural Networks (GNNs) to effectively process network traffic data.
Our approach achieves high accuracy, reaching 99.47% in threat detection, and provides clear, actionable explanations of its analytical outcomes.
arXiv Detail & Related papers (2024-02-01T18:29:16Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Explainable Spatio-Temporal Graph Neural Networks [16.313146933922752]
We propose an Explainable Spatio-Temporal Graph Neural Network (STGNN) framework that enhances STGNNs with inherent explainability.
Our framework integrates a unified spatio-temporal graph attention network with a positional information fusion layer as the STG encoder and decoder.
We demonstrate that STExplainer outperforms state-of-the-art baselines in terms of predictive accuracy and explainability metrics.
arXiv Detail & Related papers (2023-10-26T04:47:28Z)
- Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks [32.345435955298825]
Graph Neural Networks (GNNs) are neural models that leverage the dependency structure in graphical data via message passing among the graph nodes.
A main challenge in studying GNN explainability is to provide fidelity measures that evaluate the performance of these explanation functions.
This paper studies this foundational challenge, spotlighting the inherent limitations of prevailing fidelity metrics (a generic sketch of the standard fidelity definitions appears after this list).
arXiv Detail & Related papers (2023-10-03T06:25:14Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Task-Agnostic Graph Explanations [50.17442349253348]
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph structured data.
Existing learning-based GNN explanation approaches are task-specific in training.
We propose a Task-Agnostic GNN Explainer (TAGE) trained under self-supervision with no knowledge of downstream tasks.
arXiv Detail & Related papers (2022-02-16T21:11:47Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
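As referenced in the fidelity entry above, GNN explanation fidelity is commonly defined as the drop in the predicted-class probability when the explanation subgraph is removed (fidelity+) or when only the explanation subgraph is kept (fidelity-). The snippet below is a generic sketch of these standard definitions, assuming a hypothetical `predict_proba` callable over networkx graphs and node-level explanations; it is not the robust estimators proposed in the "Towards Robust Fidelity" paper.
```python
# Generic sketch of the standard fidelity+ / fidelity- metrics for GNN explanations.
# `predict_proba` is an assumed stand-in for any graph classifier returning class probabilities.
def fidelity_scores(predict_proba, graph, explanation_nodes, target_class):
    """Return (fid+, fid-) for one graph and one node-level explanation."""
    p_full = predict_proba(graph)[target_class]
    # fid+: remove the explanation; a large probability drop means it was necessary.
    masked = graph.copy()
    masked.remove_nodes_from(explanation_nodes)
    fid_plus = p_full - predict_proba(masked)[target_class]
    # fid-: keep only the explanation; a small drop means it was sufficient on its own.
    kept = graph.subgraph(explanation_nodes).copy()
    fid_minus = p_full - predict_proba(kept)[target_class]
    return fid_plus, fid_minus
```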