BetaExplainer: A Probabilistic Method to Explain Graph Neural Networks
- URL: http://arxiv.org/abs/2412.11964v1
- Date: Mon, 16 Dec 2024 16:45:26 GMT
- Authors: Whitney Sloneker, Shalin Patel, Michael Wang, Lorin Crawford, Ritambhara Singh
- Abstract summary: Graph neural networks (GNNs) are powerful tools for conducting inference on graph data.
Many interpretable GNN methods exist, but they cannot quantify uncertainty in edge weights.
We propose BetaExplainer, which addresses these issues by using a sparsity-inducing prior to mask unimportant edges during model training.
- Abstract: Graph neural networks (GNNs) are powerful tools for conducting inference on graph data but are often seen as "black boxes" due to the difficulty of extracting meaningful subnetworks driving predictive performance. Many interpretable GNN methods exist, but they cannot quantify uncertainty in edge weights and suffer in predictive accuracy when applied to challenging graph structures. In this work, we propose BetaExplainer, which addresses these issues by using a sparsity-inducing prior to mask unimportant edges during model training. To evaluate our approach, we examine various simulated data sets with diverse real-world characteristics. Not only does this implementation provide a notion of edge-importance uncertainty, it also improves upon evaluation metrics for challenging datasets compared to state-of-the-art explainer methods.
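To make the method's core idea concrete, here is a minimal PyTorch sketch of a Beta-distributed edge mask trained against a frozen GNN. The `model(x, edge_index, edge_weight=...)` signature, the Beta(0.5, 1.5) prior, and all hyperparameters are assumptions for illustration, not the authors' implementation.

```python
import torch
from torch.distributions import Beta, kl_divergence

class BetaEdgeMask(torch.nn.Module):
    """Variational Beta mask over edges (illustrative sketch)."""

    def __init__(self, num_edges):
        super().__init__()
        # Unconstrained parameters; softplus keeps alpha and beta positive.
        self.log_alpha = torch.nn.Parameter(torch.zeros(num_edges))
        self.log_beta = torch.nn.Parameter(torch.zeros(num_edges))
        # Assumed sparsity-inducing prior: most mass near 0 ("edge off").
        self.prior = Beta(torch.full((num_edges,), 0.5),
                          torch.full((num_edges,), 1.5))

    def posterior(self):
        return Beta(torch.nn.functional.softplus(self.log_alpha) + 1e-6,
                    torch.nn.functional.softplus(self.log_beta) + 1e-6)

    def forward(self):
        q = self.posterior()
        mask = q.rsample()                  # reparameterized draw in (0, 1)
        return mask, kl_divergence(q, self.prior).sum()

def explain(model, x, edge_index, target, steps=300, kl_weight=1e-3):
    """Fit the mask so masked predictions match `target` while the KL term
    pushes unimportant edges toward zero."""
    masker = BetaEdgeMask(edge_index.size(1))
    opt = torch.optim.Adam(masker.parameters(), lr=0.05)
    for _ in range(steps):
        mask, kl = masker()
        out = model(x, edge_index, edge_weight=mask)
        loss = torch.nn.functional.cross_entropy(out, target) + kl_weight * kl
        opt.zero_grad()
        loss.backward()
        opt.step()
    q = masker.posterior()
    return q.mean, q.variance               # edge importance and uncertainty
```

Because the mask is a full Beta posterior rather than a point estimate, its variance gives exactly the per-edge uncertainty that the abstract highlights.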
Related papers
- Graph Structure Learning with Interpretable Bayesian Neural Networks
We introduce novel iterations with independently interpretable parameters.
These parameters influence characteristics of the estimated graph, such as edge sparsity.
After unrolling these iterations, prior knowledge over such graph characteristics shapes the prior distributions.
Fast execution and parameter efficiency allow for high-fidelity posterior approximation.
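As a rough picture of unrolled iterations with interpretable parameters, the generic ISTA-style sketch below gives each unrolled layer a learned step size and sparsity threshold; it illustrates the unrolling idea only and is not the paper's model.

```python
import torch

class UnrolledGraphLearner(torch.nn.Module):
    """Generic unrolled sketch: each layer does one proximal-gradient step
    on edge weights, and the learned threshold is directly interpretable
    as an edge-sparsity knob."""

    def __init__(self, num_layers=10):
        super().__init__()
        self.step = torch.nn.Parameter(torch.full((num_layers,), 0.1))
        self.thresh = torch.nn.Parameter(torch.full((num_layers,), 0.05))

    def forward(self, z):
        # z: pairwise node dissimilarities; w: nonnegative edge weights.
        w = torch.zeros_like(z)
        for step, thresh in zip(self.step, self.thresh):
            w = w - step * (w - torch.exp(-z))  # pull toward a kernel of z
            w = torch.relu(w - step * thresh)   # soft-threshold: bigger thresh, sparser graph
        return w
```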
arXiv Detail & Related papers (2024-06-20T23:27:41Z)
- GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations
Diverse explainability methods of graph neural networks (GNN) have been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions.
It is not yet clear how to evaluate the correctness of those explanations, whether from a human or a model perspective.
We propose GInX-Eval, an evaluation procedure of graph explanations that overcomes the pitfalls of faithfulness.
arXiv Detail & Related papers (2023-09-28T07:56:10Z)
- Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity
Graph neural networks (GNNs) have various practical applications, such as drug discovery, recommendation engines, and chip design.
Counterfactual reasoning is used to make minimal changes to the input graph of a GNN in order to alter its prediction.
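A minimal greedy version of this search is sketched below; the `model(x, edge_index)` signature and edit budget are assumptions, and the paper's inductive approach is more sophisticated than this brute-force baseline.

```python
import torch

@torch.no_grad()
def greedy_counterfactual(model, x, edge_index, node, max_edits=10):
    """Greedily delete the edge whose removal most reduces confidence in the
    current prediction for `node`, until the prediction flips."""
    pred = model(x, edge_index).argmax(dim=-1)[node]
    keep = torch.ones(edge_index.size(1), dtype=torch.bool)
    removed = []
    for _ in range(max_edits):
        best_e, best_conf = None, float("inf")
        for e in torch.nonzero(keep).flatten().tolist():
            keep[e] = False
            logits = model(x, edge_index[:, keep])
            if logits.argmax(dim=-1)[node] != pred:
                return removed + [e]        # minimal edit set found
            conf = logits.softmax(dim=-1)[node, pred].item()
            if conf < best_conf:
                best_e, best_conf = e, conf
            keep[e] = True
        if best_e is None:
            break
        keep[best_e] = False                # commit the most damaging deletion
        removed.append(best_e)
    return None                             # no counterfactual within budget
```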
arXiv Detail & Related papers (2023-06-07T23:40:18Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Invertible Neural Networks for Graph Prediction
In this work, we address conditional generation using deep invertible neural networks.
We adopt an end-to-end training approach since our objective is to address prediction and generation in the forward and backward processes at once.
arXiv Detail & Related papers (2022-06-02T17:28:33Z)
- Bayesian Graph Contrastive Learning
We propose a novel perspective on graph contrastive learning, showing that random augmentations lead to encoders that are inherently distributional.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
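The distributional embedding can be pictured with a reparameterized Gaussian per node; `backbone` stands in for any GNN encoder and is an assumption of this sketch.

```python
import torch

class DistributionalEncoder(torch.nn.Module):
    """Embed each node as a Gaussian N(mu, sigma^2) instead of a point."""

    def __init__(self, backbone, hidden_dim, embed_dim):
        super().__init__()
        self.backbone = backbone            # any GNN producing node features
        self.mu = torch.nn.Linear(hidden_dim, embed_dim)
        self.log_var = torch.nn.Linear(hidden_dim, embed_dim)

    def forward(self, x, edge_index):
        h = self.backbone(x, edge_index)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization keeps the random draw differentiable.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        return z, mu, log_var
```

Contrasting samples drawn from two augmented views then trains both the mean and the spread of each node's embedding, which is where the uncertainty modeling comes from.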
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
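A common surrogate for such robustness objectives is an inner worst-case perturbation of the uncertain attributes; the PGD-style sketch below is a generic stand-in, not the paper's formulation.

```python
import torch

def robust_loss(model, x, edge_index, y, loss_fn, eps=0.1, ascent_steps=3):
    """Inner maximization: find a worst-case perturbation `delta` of the
    uncertain node attributes, then return the loss on the perturbed input
    for the outer (training) minimization."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(ascent_steps):
        inner = loss_fn(model(x + delta, edge_index), y)
        grad, = torch.autograd.grad(inner, delta)
        delta = (delta + eps * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return loss_fn(model(x + delta.detach(), edge_index), y)
```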
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
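A simplified variant of differentiable edge masking substitutes a deterministic sigmoid gate for the paper's stochastic gates; everything here (signature, learning rate, sparsity weight) is an illustrative assumption.

```python
import torch

def learn_edge_mask(model, x, edge_index, target, steps=200, sparsity=1e-3):
    """Optimize a sigmoid-relaxed score per edge so predictions are kept
    while the sparsity penalty pushes scores toward zero."""
    mask_logits = torch.nn.Parameter(torch.zeros(edge_index.size(1)))
    opt = torch.optim.Adam([mask_logits], lr=0.03)
    for _ in range(steps):
        mask = torch.sigmoid(mask_logits)
        out = model(x, edge_index, edge_weight=mask)
        loss = (torch.nn.functional.cross_entropy(out, target)
                + sparsity * mask.sum())    # reward dropping edges
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_logits) > 0.5  # edges judged necessary
```

Unlike the Beta posterior sketched earlier, this yields only a point estimate of edge importance, with no uncertainty attached.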
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
- Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More
We propose a model-agnostic certificate based on the randomized smoothing framework which subsumes earlier work and is tight, efficient, and sparsity-aware.
We show the effectiveness of our approach on a wide variety of models, datasets, and tasks -- specifically highlighting its use for Graph Neural Networks.
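For binary inputs, sparsity-aware smoothing can be pictured as flipping ones and zeros with different probabilities so that sparse inputs stay sparse. The Monte Carlo sketch below returns only the majority vote; a real certificate additionally needs confidence bounds on the vote counts, and the `model(x)` signature is an assumption.

```python
import torch

@torch.no_grad()
def smoothed_predict(model, x, n_samples=1000, p_plus=0.01, p_minus=0.6):
    """Majority vote of the classifier under sparsity-aware random flips:
    ones flip off with prob p_minus, zeros flip on with smaller p_plus."""
    num_classes = model(x).size(-1)
    votes = torch.zeros(x.size(0), num_classes)
    for _ in range(n_samples):
        flip = torch.where(x > 0,
                           torch.rand_like(x) < p_minus,
                           torch.rand_like(x) < p_plus)
        pred = model(torch.where(flip, 1.0 - x, x)).argmax(dim=-1)
        votes[torch.arange(x.size(0)), pred] += 1
    return votes.argmax(dim=-1)
```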
arXiv Detail & Related papers (2020-08-29T10:09:02Z)
- Towards an Efficient and General Framework of Robust Training for Graph Neural Networks
Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks.
Despite GNNs' impressive performance, it has been observed that carefully crafted perturbations on graph structures lead them to make wrong predictions.
We propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs.
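The zeroth-order ingredient can be illustrated generically: estimate a gradient from loss evaluations alone, which helps when discrete graph perturbations make the objective non-differentiable. This two-point estimator is a textbook construction, not the paper's exact algorithm.

```python
import torch

def zeroth_order_grad(loss_fn, w, mu=1e-3, n_dirs=20):
    """Two-point gradient estimate from loss evaluations alone: average
    directional finite differences over random Gaussian directions."""
    grad = torch.zeros_like(w)
    for _ in range(n_dirs):
        u = torch.randn_like(w)
        grad += (loss_fn(w + mu * u) - loss_fn(w - mu * u)) / (2 * mu) * u
    return grad / n_dirs
```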
arXiv Detail & Related papers (2020-02-25T15:17:58Z)