Globally Interpretable Graph Learning via Distribution Matching
- URL: http://arxiv.org/abs/2306.10447v2
- Date: Tue, 20 Feb 2024 20:21:19 GMT
- Title: Globally Interpretable Graph Learning via Distribution Matching
- Authors: Yi Nian, Yurui Chang, Wei Jin, Lu Lin
- Abstract summary: We aim to answer an important question that is not yet well studied: how to provide a global interpretation for the graph learning procedure?
We formulate this problem as globally interpretable graph learning, which aims to distill high-level, human-intelligible patterns that dominate the learning procedure.
We propose a novel model fidelity metric, tailored for evaluating the fidelity of the resulting model trained on interpretations.
- Score: 12.885580925389352
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have emerged as a powerful model to capture
critical graph patterns. Instead of treating them as black boxes in an
end-to-end fashion, attempts are arising to explain the model behavior.
Existing works mainly focus on local interpretation, revealing the
discriminative pattern for each individual instance; such local patterns,
however, cannot directly reflect high-level model behavior across instances. To gain global
insights, we aim to answer an important question that is not yet well studied:
how to provide a global interpretation for the graph learning procedure? We
formulate this problem as globally interpretable graph learning, which aims
to distill high-level and human-intelligible patterns that dominate the
learning procedure, such that training on this pattern can recover a similar
model. As a start, we propose a novel model fidelity metric, tailored for
evaluating the fidelity of the resulting model trained on interpretations. Our
preliminary analysis shows that interpretative patterns generated by existing
global methods fail to recover the model training procedure. Thus, we further
propose our solution, Graph Distribution Matching (GDM), which synthesizes
interpretive graphs by matching the distribution of the original and
interpretive graphs in the GNN's feature space as its training proceeds, thus
capturing the most informative patterns the model learns during training.
Extensive experiments on graph classification datasets demonstrate multiple
advantages of the proposed method, including high model fidelity, predictive
accuracy and time efficiency, as well as the ability to reveal class-relevant
structure.
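The distribution-matching idea in the abstract can be sketched minimally as follows: a loss that penalizes the distance between mean feature embeddings of the original and the interpretive (synthetic) graphs under a shared feature extractor. The one-layer extractor, its fixed weights, and all names below are illustrative assumptions; GDM matches distributions repeatedly as the GNN's weights evolve during training, which this single-snapshot sketch omits.

```python
import numpy as np

def gnn_features(adj, feats, weight):
    """One-layer GNN-style embedding: aggregate neighbor features with a
    self-loop, then apply a linear map and ReLU. A stand-in for the trained
    GNN's feature extractor (hypothetical; the paper's architecture differs).
    """
    agg = (adj + np.eye(adj.shape[0])) @ feats  # neighborhood aggregation
    return np.maximum(agg @ weight, 0.0)

def distribution_matching_loss(real_graphs, syn_graphs, weight):
    """Squared distance between the mean feature embeddings of the original
    and the interpretive (synthetic) graph sets in the feature space."""
    def mean_embed(graphs):
        # mean-pool node embeddings per graph, then average over the set
        return np.mean(
            [gnn_features(a, x, weight).mean(axis=0) for a, x in graphs],
            axis=0,
        )
    diff = mean_embed(real_graphs) - mean_embed(syn_graphs)
    return float(diff @ diff)
```

With identical graph sets the loss is exactly zero; optimizing the synthetic graphs against this loss is what would drive them toward the informative patterns the model sees during training.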
Related papers
- PAC Learnability under Explanation-Preserving Graph Perturbations [15.83659369727204]
Graph neural networks (GNNs) operate over graphs, enabling the model to leverage the complex relationships and dependencies in graph-structured data.
A graph explanation is a subgraph which is an 'almost sufficient' statistic of the input graph with respect to its classification label.
This work considers two methods for leveraging invariance to such explanation-preserving perturbations in the design and training of GNNs.
arXiv Detail & Related papers (2024-02-07T17:23:15Z) - GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks [72.01829954658889]
This paper introduces the mathematical definition of this novel problem setting.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z) - Sub-Graph Learning for Spatiotemporal Forecasting via Knowledge Distillation [22.434970343698676]
We present a new framework called KD-SGL to effectively learn the sub-graphs.
We define one global model to learn the overall structure of the graph and multiple local models for each sub-graph.
arXiv Detail & Related papers (2022-11-17T18:02:55Z) - Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
arXiv Detail & Related papers (2022-08-18T01:31:25Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
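The upper-model idea just described can be sketched roughly as follows. This is a simplified, hypothetical reading: the upper GNN is reduced to a single unweighted message-passing step over the feature-data graph, and `extrapolate_feature_embedding` is an assumed name, not the paper's API.

```python
import numpy as np

def extrapolate_feature_embedding(obs_matrix, feature_embeds, new_feature_col):
    """Extrapolate an embedding for an unseen feature by one message-passing
    step over the bipartite feature-data graph: average the embeddings of
    known features that co-occur with the new feature on the same data points.
    obs_matrix:      (n_points, n_features) binary observation indicators
    feature_embeds:  (n_features, d) embeddings of the known features
    new_feature_col: (n_points,) indicator of where the new feature is observed
    """
    rows_with_new = new_feature_col > 0
    cooccur = obs_matrix[rows_with_new].sum(axis=0)  # co-occurrence counts
    weights = cooccur / max(cooccur.sum(), 1e-12)    # normalize to a mean
    return weights @ feature_embeds
```

A learned upper GNN would replace the uniform averaging with trained message functions; the sketch only shows how information flows from observed features to a new one before the backbone consumes the result.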
arXiv Detail & Related papers (2021-10-09T09:02:45Z) - Information-theoretic Evolution of Model Agnostic Global Explanations [10.921146104622972]
We present a novel model-agnostic approach that derives rules to globally explain the behavior of classification models trained on numerical and/or categorical data.
Our approach has been deployed in a leading digital marketing suite of products.
arXiv Detail & Related papers (2021-05-14T16:52:16Z) - Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all listed content) and is not responsible for any consequences of its use.