BAGEL: A Benchmark for Assessing Graph Neural Network Explanations
- URL: http://arxiv.org/abs/2206.13983v1
- Date: Tue, 28 Jun 2022 13:08:28 GMT
- Title: BAGEL: A Benchmark for Assessing Graph Neural Network Explanations
- Authors: Mandeep Rathee, Thorben Funke, Avishek Anand, Megha Khosla
- Abstract summary: Given a graph neural network (GNN) model, several interpretability approaches exist to explain its predictions.
We propose a benchmark for evaluating the explainability approaches for GNNs called Bagel.
We conduct an extensive empirical study on four GNN models and nine post-hoc explanation approaches for node and graph classification tasks.
- Score: 4.43959863685757
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The problem of interpreting the decisions of machine learning
models is well-researched and important. We are interested in a specific type
of machine learning model that deals with graph data: graph neural networks.
Evaluating interpretability approaches for graph neural networks (GNNs) is
known to be challenging due to the lack of a commonly accepted benchmark.
Given a GNN model, several interpretability approaches exist to explain it,
with diverse (sometimes conflicting) evaluation
methodologies. In this paper, we propose a benchmark for evaluating the
explainability approaches for GNNs called Bagel. In Bagel, we first propose
four diverse GNN explanation evaluation regimes -- 1) faithfulness, 2)
sparsity, 3) correctness, and 4) plausibility. We reconcile multiple evaluation
metrics in the existing literature and cover diverse notions for a holistic
evaluation. Our graph datasets range from citation networks, document graphs,
to graphs from molecules and proteins. We conduct an extensive empirical study
on four GNN models and nine post-hoc explanation approaches for node and graph
classification tasks. We open both the benchmarks and reference implementations
and make them available at https://github.com/Mandeep-Rathee/Bagel-benchmark.
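The four regimes are defined formally in the paper; as a rough illustration, two of them can be sketched in a few lines: faithfulness via the drop in prediction confidence when the explanation's edges are removed, and sparsity as the fraction of edges excluded from the explanation. The function names, the 0.5 threshold, and the probability-drop formulation below are illustrative assumptions, not BAGEL's exact definitions.

```python
import numpy as np

def sparsity(edge_mask, threshold=0.5):
    """Fraction of edges excluded from the explanation (higher = sparser).

    `edge_mask` holds an importance score in [0, 1] per edge; the 0.5
    cutoff is an illustrative choice, not BAGEL's exact protocol.
    """
    edge_mask = np.asarray(edge_mask, dtype=float)
    return 1.0 - (edge_mask >= threshold).mean()

def fidelity(full_pred, masked_pred, target):
    """Drop in the target-class probability when the explanation's edges
    are removed from the graph; a faithful explanation causes a large drop.
    """
    return full_pred[target] - masked_pred[target]
```

A usage pattern would be to run the GNN once on the full graph, once on the graph with the explanation's edges deleted, and compare the two probability vectors with `fidelity`.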
Related papers
- Graph Neural Networks on Discriminative Graphs of Words [19.817473565906777]
In this work, we explore a new Discriminative Graph of Words Graph Neural Network (DGoW-GNN) approach to classify text.
We propose a new model for the graph-based classification of text, which combines a GNN and a sequence model.
We evaluate our approach on seven benchmark datasets and find that it is outperformed by several state-of-the-art baseline models.
arXiv Detail & Related papers (2024-10-27T15:14:06Z) - The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs [59.03660013787925]
We introduce the Heterophily Snowflake Hypothesis and provide an effective solution to guide and facilitate research on heterophilic graphs.
Our observations show that our framework acts as a versatile operator for diverse tasks.
It can be integrated into various GNN frameworks, boosting performance in-depth and offering an explainable approach to choosing the optimal network depth.
arXiv Detail & Related papers (2024-06-18T12:16:00Z) - Classifying Nodes in Graphs without GNNs [50.311528896010785]
We propose a fully GNN-free approach for node classification, requiring no GNN at either training or test time.
Our method consists of three key components: smoothness constraints, pseudo-labeling iterations and neighborhood-label histograms.
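Of the three components, the neighborhood-label histogram is the most self-contained: each node is featurized by counting the labels of its neighbors. A minimal sketch, assuming a dense 0/1 adjacency matrix and `-1` for unlabeled nodes (both choices are illustrative, not the paper's implementation):

```python
import numpy as np

def neighbor_label_histograms(adj, labels, num_classes):
    """Count, for each node, how many neighbors carry each class label.

    adj: dense 0/1 adjacency matrix of shape (n, n).
    labels: int array of length n, with -1 marking unlabeled nodes.
    Names and the dense-matrix representation are illustrative assumptions.
    """
    n = adj.shape[0]
    hist = np.zeros((n, num_classes))
    for i in range(n):
        for j in np.nonzero(adj[i])[0]:
            if labels[j] >= 0:  # skip unlabeled neighbors
                hist[i, labels[j]] += 1
    return hist
```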
arXiv Detail & Related papers (2024-02-08T18:59:30Z) - GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z) - GNNInterpreter: A Probabilistic Generative Model-Level Explanation for Graph Neural Networks [25.94529851210956]
We propose GNNInterpreter, a model-agnostic, model-level explanation method for Graph Neural Networks (GNNs) that follow the message-passing scheme.
GNNInterpreter learns a probabilistic generative graph distribution that produces the most discriminative graph pattern the GNN tries to detect.
Compared to existing works, GNNInterpreter is more flexible and computationally efficient in generating explanation graphs with different types of node and edge features.
arXiv Detail & Related papers (2022-09-15T07:45:35Z) - GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks [15.648750523827616]
We propose the first systematic evaluation framework for GNN explainability, considering explainability on three different "user needs".
On the inadequate but widely used synthetic benchmarks, surprisingly shallow techniques such as personalized PageRank perform best at minimal computation cost.
But when the graph structure is more complex and nodes have meaningful features, gradient-based methods are the best according to our evaluation criteria.
arXiv Detail & Related papers (2022-06-20T09:33:12Z) - Beyond Real-world Benchmark Datasets: An Empirical Study of Node Classification with GNNs [3.547529079746247]
Graph Neural Networks (GNNs) have achieved great success on a node classification task.
Existing evaluation of GNNs lacks fine-grained analysis from various characteristics of graphs.
We conduct extensive experiments with a synthetic graph generator that can generate graphs having controlled characteristics for fine-grained analysis.
arXiv Detail & Related papers (2022-06-18T08:03:12Z) - Towards Self-Explainable Graph Neural Network [24.18369781999988]
Graph Neural Networks (GNNs) generalize the deep neural networks to graph-structured data.
GNNs lack explainability, which limits their adoption in scenarios that demand the transparency of models.
We propose a new framework which can find $K$-nearest labeled nodes for each unlabeled node to give explainable node classification.
arXiv Detail & Related papers (2021-08-26T22:45:11Z) - Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
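A sketch of the edge-dropping step implied here: given learned per-edge importance scores, retain only the top fraction of edges and re-evaluate the model on the pruned graph. The function name and the `keep_ratio` parameter are illustrative assumptions, not the paper's API.

```python
import numpy as np

def prune_edges(edge_index, edge_mask, keep_ratio=0.3):
    """Keep only the highest-scoring fraction of edges.

    edge_index: (2, E) array of source/target node indices.
    edge_mask:  (E,) learned importance score per edge.
    keep_ratio is illustrative; the paper studies how many edges can be
    dropped without hurting task performance.
    """
    e = edge_mask.shape[0]
    k = max(1, int(round(keep_ratio * e)))
    keep = np.argsort(edge_mask)[-k:]        # indices of top-k edges
    return edge_index[:, np.sort(keep)]      # preserve original edge order
```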
arXiv Detail & Related papers (2020-10-01T17:51:19Z) - Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of techniques for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.