Evaluating Link Prediction Explanations for Graph Neural Networks
- URL: http://arxiv.org/abs/2308.01682v1
- Date: Thu, 3 Aug 2023 10:48:37 GMT
- Title: Evaluating Link Prediction Explanations for Graph Neural Networks
- Authors: Claudio Borile, Alan Perotti, André Panisson
- Abstract summary: We provide metrics to assess the quality of link prediction explanations, with or without ground-truth.
We discuss how underlying assumptions and technical details specific to the link prediction task, such as the choice of distance between node embeddings, can influence the quality of the explanations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Machine Learning (GML) has numerous applications, such as node/graph
classification and link prediction, in real-world domains. Providing
human-understandable explanations for GML models is a challenging yet
fundamental task to foster their adoption, but validating explanations for link
prediction models has received little attention. In this paper, we provide
quantitative metrics to assess the quality of link prediction explanations,
with or without ground-truth. State-of-the-art explainability methods for Graph
Neural Networks are evaluated using these metrics. We discuss how underlying
assumptions and technical details specific to the link prediction task, such as
the choice of distance between node embeddings, can influence the quality of
the explanations.
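The paper's concrete metrics are not reproduced here, but a minimal sketch of the kind of removal-based check such metrics formalize, and of where the choice of distance or decoder between node embeddings enters, might look like the following (all function names, and the user-supplied `embed_fn`, are illustrative assumptions, not the authors' code):

```python
# Hedged sketch: fidelity-style check for a link prediction explanation.
# The decoder argument shows how the choice of distance between node
# embeddings changes the evaluated quantity.
import numpy as np

def link_score(z_u, z_v, decoder="dot"):
    """Score a candidate link from the two endpoint embeddings."""
    if decoder == "dot":
        return float(z_u @ z_v)
    if decoder == "neg_l2":
        return float(-np.linalg.norm(z_u - z_v))
    raise ValueError(decoder)

def remove_edges(edge_list, edges_to_drop):
    """Return a copy of the edge list without the explanation edges."""
    drop = {tuple(e) for e in edges_to_drop}
    return [e for e in edge_list if tuple(e) not in drop]

def link_fidelity(embed_fn, edge_list, u, v, explanation_edges, decoder="dot"):
    """Drop in the predicted link score after removing the edges an explainer
    marked as important; a larger drop suggests a more faithful explanation.
    `embed_fn` is an assumed callable mapping an edge list to a node
    embedding matrix of shape (num_nodes, d)."""
    z_full = embed_fn(edge_list)
    z_masked = embed_fn(remove_edges(edge_list, explanation_edges))
    return (link_score(z_full[u], z_full[v], decoder)
            - link_score(z_masked[u], z_masked[v], decoder))
```

With a ground-truth explanation available, the same setup allows comparing the explainer's edge ranking against the known relevant edges instead of measuring the score drop.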
Related papers
- Semantic Interpretation and Validation of Graph Attention-based Explanations for GNN Models [9.260186030255081]
We propose a methodology for investigating the use of semantic attention to enhance the explainability of Graph Neural Network (GNN)-based models.
Our work extends existing attention-based graph explainability methods by analysing the divergence in the attention distributions in relation to semantically sorted feature sets.
We apply our methodology to a lidar point cloud estimation model, successfully identifying key semantic classes that contribute to enhanced performance.
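As a toy illustration of what comparing attention mass across semantically grouped inputs can look like (all class labels and weights below are invented for the example, not taken from the paper):

```python
# Hedged sketch: aggregate attention weights by semantic class and measure
# how far the resulting distribution diverges from a uniform reference.
import numpy as np

def attention_by_class(attn_weights, classes, num_classes):
    """Sum per-node/per-edge attention into a distribution over semantic classes."""
    mass = np.zeros(num_classes)
    for w, c in zip(attn_weights, classes):
        mass[c] += w
    return mass / mass.sum()

def kl_divergence(p, q, eps=1e-12):
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

# Example: attention concentrated on class 2 diverges strongly from uniform.
attn = np.array([0.05, 0.10, 0.70, 0.15])
classes = np.array([0, 1, 2, 2])
p = attention_by_class(attn, classes, num_classes=3)
print(kl_divergence(p, np.full(3, 1 / 3)))
```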
arXiv Detail & Related papers (2023-08-08T12:34:32Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
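A much-simplified sketch of the decomposition idea, restricted to a single linear message-passing step so the additivity of contributions is easy to verify (this is not DEGREE's actual algorithm):

```python
# Hedged sketch: for a linear propagation step H = A_hat @ X @ W, the output
# splits additively into the part generated by a node subset and the rest.
import numpy as np

def propagate(a_hat, x, w):
    return a_hat @ x @ w

def contribution(a_hat, x, w, subset):
    """Output attributable to the features of nodes in `subset`."""
    x_part = np.zeros_like(x)
    x_part[subset] = x[subset]
    return propagate(a_hat, x_part, w)

rng = np.random.default_rng(0)
a_hat = rng.random((5, 5)); x = rng.random((5, 3)); w = rng.random((3, 2))
s, rest = [0, 2], [1, 3, 4]
# Additivity: contributions of S and its complement sum to the full output.
assert np.allclose(contribution(a_hat, x, w, s) + contribution(a_hat, x, w, rest),
                   propagate(a_hat, x, w))
```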
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- PaGE-Link: Path-based Graph Neural Network Explanation for Heterogeneous Link Prediction [37.57586847539004]
Transparency and accountability have become major concerns for black-box machine learning (ML) models.
We propose Path-based GNN Explanation for heterogeneous Link prediction (PaGE-Link) that generates explanations with connection interpretability.
We show that explanations generated by PaGE-Link improve AUC for recommendation on citation and user-item graphs by 9-35% and are chosen by 78.79% of responses in human evaluation.
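A rough sketch of the path-based flavour of such explanations, assuming per-edge importance weights are already available from some learned mask (this is not PaGE-Link's actual objective):

```python
# Hedged sketch: enumerate short paths between the two query nodes and rank
# them by the product of assumed per-edge importance weights.
import networkx as nx
import numpy as np

def rank_paths(graph, u, v, edge_importance, cutoff=3, top_k=2):
    scored = []
    for path in nx.all_simple_paths(graph, u, v, cutoff=cutoff):
        edges = zip(path[:-1], path[1:])
        score = float(np.prod([edge_importance.get(tuple(sorted(e)), 1e-6)
                               for e in edges]))
        scored.append((score, path))
    return sorted(scored, reverse=True)[:top_k]

g = nx.Graph([(0, 1), (1, 2), (0, 3), (3, 2), (0, 2)])
imp = {(0, 1): 0.9, (1, 2): 0.8, (0, 3): 0.2, (2, 3): 0.3, (0, 2): 0.4}
print(rank_paths(g, 0, 2, imp))  # the high-importance path 0-1-2 ranks first
```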
arXiv Detail & Related papers (2023-02-24T05:43:47Z)
- A Survey of Explainable Graph Neural Networks: Taxonomy and Evaluation Metrics [8.795591344648294]
We focus on explainable graph neural networks and categorize them based on the use of explainable methods.
We provide the common performance metrics for GNNs explanations and point out several future research directions.
arXiv Detail & Related papers (2022-07-26T01:45:54Z)
- Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing the out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z)
- Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows [67.23405590815602]
Graph Neural Networks (GNNs) are widely adopted in digital pathology.
Existing explainers discover an explanatory subgraph relevant to the prediction.
An explanatory subgraph should be not only necessary for prediction, but also sufficient to uncover the most predictive regions.
We propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs.
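A generic sketch of necessity and sufficiency checks for an explanatory subgraph; IFEXPLAINER's information-flow formulation is not reproduced, and `predict_fn` is an assumed user-supplied callable mapping an edge list to the model's output score:

```python
# Hedged sketch of necessity/sufficiency checks for an explanatory subgraph.
# Edges are assumed to be hashable tuples.
def necessity(predict_fn, edges, explanation):
    """Prediction drop when the explanation edges are removed (higher = more necessary)."""
    drop = set(explanation)
    kept = [e for e in edges if e not in drop]
    return predict_fn(edges) - predict_fn(kept)

def sufficiency(predict_fn, edges, explanation):
    """Prediction gap when only the explanation edges are kept (lower = more sufficient)."""
    return predict_fn(edges) - predict_fn(list(explanation))
```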
arXiv Detail & Related papers (2021-12-18T10:19:01Z)
- SEEN: Sharpening Explanations for Graph Neural Networks using Explanations from Neighborhoods [0.0]
We propose a method to improve the explanation quality of node classification tasks through aggregation of auxiliary explanations.
Applying SEEN does not require modification of a graph and can be used with diverse explainability techniques.
Experiments on matching motif-participating nodes from a given graph show improvements in explanation accuracy of up to 12.71%.
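A minimal sketch of the aggregation idea; the blending weight and the simple averaging scheme are assumptions for illustration rather than SEEN's actual procedure:

```python
# Hedged sketch: blend the target node's explanation scores with auxiliary
# explanations computed for its neighbours over the same candidate edges.
import numpy as np

def aggregate_explanations(target_scores, neighbor_scores, alpha=0.5):
    """Weighted combination of the target explanation and the mean of the
    neighbours' explanations (alpha is an assumed blending weight)."""
    neighbor_mean = np.mean(neighbor_scores, axis=0)
    return alpha * np.asarray(target_scores) + (1 - alpha) * neighbor_mean

target = [0.9, 0.1, 0.4]
neighbours = [[0.8, 0.2, 0.5], [0.7, 0.0, 0.6]]
print(aggregate_explanations(target, neighbours))
```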
arXiv Detail & Related papers (2021-06-16T03:04:46Z)
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
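A minimal sketch of post-hoc differentiable edge masking; the paper's method is more involved, whereas this toy version directly optimizes a sigmoid mask per edge, and `model` is assumed to accept per-edge weights:

```python
# Hedged sketch: learn a soft edge mask that keeps the prediction close to
# the original while penalizing the number of retained edges.
import torch

def learn_edge_mask(model, x, edge_index, steps=200, lam=0.01):
    logits = torch.zeros(edge_index.size(1), requires_grad=True)
    opt = torch.optim.Adam([logits], lr=0.05)
    with torch.no_grad():
        target = model(x, edge_index, torch.ones(edge_index.size(1)))
    for _ in range(steps):
        mask = torch.sigmoid(logits)
        pred = model(x, edge_index, mask)
        # Stay close to the original prediction while using few edges.
        loss = torch.nn.functional.mse_loss(pred, target) + lam * mask.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(logits).detach()  # low values mark droppable edges
```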
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
- Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction [69.1473775184952]
We introduce a realistic problem of few-shot out-of-graph link prediction.
We tackle this problem with a novel transductive meta-learning framework.
We validate our model on multiple benchmark datasets for knowledge graph completion and drug-drug interaction prediction.
arXiv Detail & Related papers (2020-06-11T17:42:46Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
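A compact sketch of a discriminator-based, mutual-information-style objective between node inputs and encoder outputs, in the general spirit of such methods; GMI's actual decomposition into feature and topology terms is not reproduced:

```python
# Hedged sketch: Jensen-Shannon-style MI lower bound between node features x
# and encoder outputs z, with shuffled z serving as negative pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.bilinear = nn.Bilinear(in_dim, emb_dim, 1)

    def forward(self, x, z):
        return self.bilinear(x, z).squeeze(-1)

def jsd_mi_lower_bound(disc, x, z):
    """Positive pairs are (x_i, z_i); negatives pair x_i with a shuffled z."""
    pos = disc(x, z)
    neg = disc(x, z[torch.randperm(z.size(0))])
    return (-F.softplus(-pos)).mean() - F.softplus(neg).mean()
```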
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.