Globally Interpretable Graph Learning via Distribution Matching
- URL: http://arxiv.org/abs/2306.10447v2
- Date: Tue, 20 Feb 2024 20:21:19 GMT
- Title: Globally Interpretable Graph Learning via Distribution Matching
- Authors: Yi Nian, Yurui Chang, Wei Jin, Lu Lin
- Abstract summary: We aim to answer an important question that is not yet well studied: how to provide a global interpretation for the graph learning procedure?
We formulate this problem as globally interpretable graph learning, which aims to distill the high-level, human-intelligible patterns that dominate the learning procedure.
We propose a novel model fidelity metric, tailored to evaluating how faithfully a model trained on the interpretations recovers the original model.
- Score: 12.885580925389352
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have emerged as a powerful model to capture
critical graph patterns. Instead of treating them as black boxes in an
end-to-end fashion, attempts are arising to explain the model behavior.
Existing works mainly focus on local interpretation to reveal the
discriminative pattern for each individual instance, which however cannot
directly reflect the high-level model behavior across instances. To gain global
insights, we aim to answer an important question that is not yet well studied:
how to provide a global interpretation for the graph learning procedure? We
formulate this problem as globally interpretable graph learning, which aims to
distill the high-level, human-intelligible patterns that dominate the learning
procedure, such that training on these patterns alone can recover a similar
model. As a start, we propose a novel model fidelity metric, tailored to
evaluating how faithfully a model trained on the interpretations recovers the
original model. Our preliminary analysis shows that interpretive patterns
generated by existing
global methods fail to recover the model training procedure. Thus, we further
propose our solution, Graph Distribution Matching (GDM), which synthesizes
interpretive graphs by matching the distribution of the original and
interpretive graphs in the GNN's feature space as its training proceeds, thus
capturing the most informative patterns the model learns during training.
Extensive experiments on graph classification datasets demonstrate multiple
advantages of the proposed method, including high model fidelity, predictive
accuracy and time efficiency, as well as the ability to reveal class-relevant
structure.
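To make the mechanism concrete, below is a minimal sketch of the two ingredients the abstract describes: synthesizing interpretive graphs by matching them to real graphs in the GNN's feature space, and scoring model fidelity by how often a model retrained on the interpretations agrees with the original model. The names (`TinyGCN`, `matching_loss`, `model_fidelity`), the dense-adjacency encoding, and the first-moment (mean-embedding) matching distance are illustrative assumptions for this sketch, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

class TinyGCN(torch.nn.Module):
    """Minimal dense GCN encoder: two propagation layers + mean pooling."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, hid_dim)

    def forward(self, adj, x):
        # adj: (n, n) dense adjacency; x: (n, in_dim) node features
        h = F.relu(self.lin1(adj @ x))
        h = self.lin2(adj @ h)
        return h.mean(dim=0)  # (hid_dim,) graph-level embedding

def matching_loss(gnn, real_graphs, syn_adj_logits, syn_x):
    """Match a synthetic graph to real graphs in the GNN's feature space.
    Uses a simple first-moment distance (mean embedding) for illustration."""
    real = torch.stack([gnn(a, x) for a, x in real_graphs]).mean(dim=0)
    syn = gnn(torch.sigmoid(syn_adj_logits), syn_x)  # relaxed (0,1) adjacency
    return F.mse_loss(syn, real)

def model_fidelity(model_orig, model_interp, graphs):
    """Fraction of graphs on which a model trained on the interpretations
    predicts the same class as the original model (both return class logits)."""
    agree = sum(int(model_orig(a, x).argmax() == model_interp(a, x).argmax())
                for a, x in graphs)
    return agree / len(graphs)
```

In a full GDM-style loop, one would keep a small set of learnable `(syn_adj_logits, syn_x)` parameters per class and take a gradient step on `matching_loss` with respect to them at each step of the GNN's own training on real data, so the interpretive graphs track the features the model finds informative across the whole training trajectory rather than only at convergence.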
Related papers
- A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation [59.14165404728197]
We provide an up-to-date and forward-looking review of deep graph learning under distribution shifts.
Specifically, we cover three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation.
To provide a better understanding of the literature, we systematically categorize the existing models based on our proposed taxonomy.
arXiv Detail & Related papers (2024-10-25T02:39:56Z) - Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z) - DIVE: Subgraph Disagreement for Graph Out-of-Distribution Generalization [44.291382840373]
This paper addresses the challenge of out-of-distribution generalization in graph machine learning.
Traditional graph learning algorithms, which assume that training and test data are drawn from the same distribution, falter in real-world scenarios where this assumption fails.
A principal factor contributing to this suboptimal performance is the inherent simplicity bias of neural networks.
arXiv Detail & Related papers (2024-08-08T12:08:55Z) - GraphGLOW: Universal and Generalizable Structure Learning for Graph
Neural Networks [72.01829954658889]
This paper introduces a mathematical definition of this novel problem setting.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z) - Sub-Graph Learning for Spatiotemporal Forecasting via Knowledge
Distillation [22.434970343698676]
We present a new framework called KD-SGL to effectively learn the sub-graphs.
We define one global model to learn the overall structure of the graph and multiple local models for each sub-graph.
arXiv Detail & Related papers (2022-11-17T18:02:55Z) - Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
arXiv Detail & Related papers (2022-08-18T01:31:25Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning, showing that random augmentations naturally lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Information-theoretic Evolution of Model Agnostic Global Explanations [10.921146104622972]
We present a novel model-agnostic approach that derives rules to globally explain the behavior of classification models trained on numerical and/or categorical data.
Our approach has been deployed in a leading digital marketing suite of products.
arXiv Detail & Related papers (2021-05-14T16:52:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.