GraphXAIN: Narratives to Explain Graph Neural Networks
- URL: http://arxiv.org/abs/2411.02540v3
- Date: Wed, 12 Feb 2025 15:14:01 GMT
- Title: GraphXAIN: Narratives to Explain Graph Neural Networks
- Authors: Mateusz Cedro, David Martens
- Abstract summary: Graph Neural Networks (GNNs) are a powerful technique for machine learning on graph-structured data. Existing GNN explanation methods yield technical outputs that are difficult for non-data scientists to understand. We propose GraphXAIN, a method that generates natural language narratives explaining GNN predictions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are a powerful technique for machine learning on graph-structured data, yet they pose challenges in interpretability. Existing GNN explanation methods usually yield technical outputs, such as subgraphs and feature importance scores, that are difficult for non-data scientists to understand and thereby defeat the purpose of explanations. Motivated by recent Explainable AI (XAI) research, we propose GraphXAIN, a method that generates natural language narratives explaining GNN predictions. GraphXAIN is a model- and explainer-agnostic method that uses Large Language Models (LLMs) to translate explanatory subgraphs and feature importance scores into coherent, story-like explanations of GNN decision-making processes. Evaluations on real-world datasets demonstrate GraphXAIN's ability to improve graph explanations. A survey of machine learning researchers and practitioners reveals that GraphXAIN enhances four explainability dimensions: understandability, satisfaction, convincingness, and suitability for communicating model predictions. When combined with another graph explainer method, GraphXAIN further improves trustworthiness, insightfulness, confidence, and usability. Notably, 95% of participants found GraphXAIN to be a valuable addition to the GNN explanation method. By incorporating natural language narratives, our approach serves both graph practitioners and non-expert users by providing clearer and more effective explanations.
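The abstract describes a pipeline rather than a single model: an upstream graph explainer produces a subgraph and feature importance scores, and an LLM translates them into a narrative. Below is a minimal sketch of that idea, assuming a hypothetical `llm` callable (any chat-completion wrapper would do); the function name and prompt wording are illustrative, not GraphXAIN's actual implementation.

```python
# Illustrative sketch of an explainer-agnostic "subgraph + importances -> narrative"
# pipeline, as described in the abstract. The `llm` callable is a placeholder for
# any LLM client; all names here are hypothetical, not GraphXAIN's actual API.
from typing import Callable, Dict, List, Tuple

def narrate_gnn_prediction(
    node_id: int,
    prediction: str,
    subgraph_edges: List[Tuple[str, str]],   # edges from any subgraph explainer
    feature_importances: Dict[str, float],   # scores from any feature attributor
    llm: Callable[[str], str],               # e.g. a wrapper around an LLM API
) -> str:
    """Turn technical explainer outputs into a natural language narrative."""
    edges_text = "; ".join(f"{u} -- {v}" for u, v in subgraph_edges)
    feats_text = ", ".join(
        f"{name} (importance {score:.2f})"
        for name, score in sorted(feature_importances.items(),
                                  key=lambda kv: -abs(kv[1]))
    )
    prompt = (
        f"A graph neural network predicted '{prediction}' for node {node_id}. "
        f"The most influential connections were: {edges_text}. "
        f"The most important features were: {feats_text}. "
        "Write a short, coherent narrative explaining this prediction "
        "for a non-technical reader."
    )
    return llm(prompt)
```

Because the explainer outputs arrive as plain edges and scores, any subgraph explainer (e.g. PGExplainer or SubgraphX, both listed below) could feed such a step, which is what makes a method of this kind explainer-agnostic.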
Related papers
- GIN-Graph: A Generative Interpretation Network for Model-Level Explanation of Graph Neural Networks [0.40964539027092917]
We propose a new Generative Interpretation Network for Model-Level Explanation of Graph Neural Networks (GIN-Graph) to generate reliable model-level explanation graphs.
GIN-Graph can be easily applied to GNN models trained on a variety of graph datasets to create meaningful explanation graphs.
arXiv Detail & Related papers (2025-03-08T22:39:36Z)
- The GECo algorithm for Graph Neural Networks Explanation [0.0]
We introduce a new methodology involving graph communities to address the interpretability of graph classification problems.
The proposed method, called GECo, exploits the idea that if a community is a densely connected subset of graph nodes, this property should play a role in graph classification.
The obtained results outperform those of the other methods on artificial graph datasets and most real-world datasets.
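To make the community intuition concrete, here is a rough sketch in the spirit of the description above: detect densely connected communities, then rank them by how much removing each one changes the classifier's confidence. `predict_proba` is a hypothetical stand-in for a trained graph classifier; this illustrates the intuition only, not the authors' actual algorithm.

```python
# Rough sketch of the community-based idea behind GECo: rank densely connected
# communities by how much the classifier's confidence drops when the community
# is removed. `predict_proba(graph) -> class probabilities` is hypothetical.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def rank_communities(G: nx.Graph, predict_proba, target_class: int):
    base = predict_proba(G)[target_class]
    scores = []
    for community in greedy_modularity_communities(G):
        reduced = G.copy()
        reduced.remove_nodes_from(community)
        if reduced.number_of_nodes() == 0:
            continue  # skip the degenerate case where one community covers the graph
        drop = base - predict_proba(reduced)[target_class]
        scores.append((sorted(community), drop))
    # communities whose removal hurts the prediction most are most explanatory
    return sorted(scores, key=lambda s: -s[1])
```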
arXiv Detail & Related papers (2024-11-18T09:08:30Z)
- GraphNarrator: Generating Textual Explanations for Graph Neural Networks [14.367754016281934]
We present GraphNarrator, the first method designed to generate natural language explanations for Graph Neural Networks. To address the lack of ground truth explanations to train the model, we propose first generating pseudo-labels that capture the model's decisions from saliency-based explanations. The high-quality pseudo-labels are finally utilized to train an end-to-end explanation generator model.
arXiv Detail & Related papers (2024-10-20T03:55:46Z)
- Verbalized Graph Representation Learning: A Fully Interpretable Graph Model Based on Large Language Models Throughout the Entire Process [8.820909397907274]
We propose a verbalized graph representation learning (VGRL) method which is fully interpretable.
In contrast to traditional graph machine learning models, VGRL constrains the parameter space to textual descriptions.
We conduct several studies to empirically evaluate the effectiveness of VGRL.
arXiv Detail & Related papers (2024-10-02T12:07:47Z)
- When Graph Data Meets Multimodal: A New Paradigm for Graph Understanding and Reasoning [54.84870836443311]
The paper presents a new paradigm for understanding and reasoning about graph data by integrating image encoding and multimodal technologies.
This approach enables the comprehension of graph data through an instruction-response format, utilizing GPT-4V's advanced capabilities.
The study evaluates this paradigm on various graph types, highlighting the model's strengths and weaknesses, particularly in Chinese OCR performance and complex reasoning tasks.
arXiv Detail & Related papers (2023-12-16T08:14:11Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Explainability in subgraphs-enhanced Graph Neural Networks [12.526174412246107]
Subgraphs-enhanced Graph Neural Networks (SGNNs) have been introduced to enhance the expressive power of GNNs.
In this work, we adapt PGExplainer, one of the most recent explainers for GNNs, to SGNNs.
We show that our framework is successful in explaining the decision process of an SGNN on graph classification tasks.
arXiv Detail & Related papers (2022-09-16T13:39:10Z)
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve strong performance on various learning tasks over geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
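As a reference point for the game-theoretic attribution GraphSVX builds on, a naive Monte Carlo estimate of a node's Shapley value can be sketched as follows. `predict_proba` is again a hypothetical classifier wrapper (it must define a baseline output for the empty graph), and the paper itself uses a learned surrogate over sampled masks rather than this brute-force estimator.

```python
# Generic Monte Carlo estimate of a node's Shapley value: sample a random
# coalition of the other nodes and average the node's marginal contribution.
# This illustrates the attribution principle only, not GraphSVX's method.
import random
import networkx as nx

def shapley_node(G: nx.Graph, node, predict_proba, target_class: int,
                 n_samples: int = 200, seed: int = 0) -> float:
    rng = random.Random(seed)
    others = [v for v in G.nodes if v != node]
    total = 0.0
    for _ in range(n_samples):
        rng.shuffle(others)
        k = rng.randrange(len(others) + 1)  # node's position in a random order
        coalition = others[:k]
        # marginal contribution of `node` to this random coalition
        p_with = predict_proba(G.subgraph(coalition + [node]))[target_class]
        p_without = predict_proba(G.subgraph(coalition))[target_class]
        total += p_with - p_without
    return total / n_samples
```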
arXiv Detail & Related papers (2021-04-18T10:40:37Z)
- ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning [65.15423587105472]
We present a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.
Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance.
A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths.
arXiv Detail & Related papers (2021-04-15T17:51:36Z)
- On Explainability of Graph Neural Networks via Subgraph Explorations [48.56936527708657]
We propose a novel method, known as SubgraphX, to explain graph neural networks (GNNs).
Our work represents the first attempt to explain GNNs via identifying subgraphs explicitly.
arXiv Detail & Related papers (2021-02-09T22:12:26Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
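The "parameterized" part is what enables the inductive setting: instead of optimizing a mask per instance, a small network predicts edge importance from the trained GNN's node embeddings, so it can be reused on unseen graphs. A minimal PyTorch sketch of such an edge scorer follows; the module and argument names are illustrative, and the paper's mutual-information training objective is omitted.

```python
# Minimal sketch of the parameterized idea behind PGExplainer: an MLP scores
# every edge from the trained GNN's node embeddings, so a single explainer
# generalizes across instances. Names are illustrative, not the authors' code.
import torch
import torch.nn as nn

class EdgeMaskPredictor(nn.Module):
    def __init__(self, embed_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden_dim),  # concatenated endpoint embeddings
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, node_emb: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # node_emb: [num_nodes, embed_dim] from the trained GNN
        # edge_index: [2, num_edges] source/target node ids
        src, dst = edge_index
        edge_feat = torch.cat([node_emb[src], node_emb[dst]], dim=-1)
        return torch.sigmoid(self.mlp(edge_feat)).squeeze(-1)  # edge importance in (0, 1)
```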
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
- GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks [45.824642013383944]
Graph neural networks (GNNs) have been shown to effectively represent graph-structured data.
We propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso.
arXiv Detail & Related papers (2020-01-17T09:50:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.