Towards Training GNNs using Explanation Directed Message Passing
- URL: http://arxiv.org/abs/2211.16731v2
- Date: Thu, 1 Dec 2022 08:20:04 GMT
- Title: Towards Training GNNs using Explanation Directed Message Passing
- Authors: Valentina Giunchiglia, Chirag Varun Shukla, Guadalupe Gonzalez, Chirag
Agarwal
- Abstract summary: We introduce a novel explanation-directed neural message passing framework for GNNs, EXPASS (EXplainable message PASSing)
We show that EXPASS alleviates the oversmoothing problem in GNNs by slowing the layer-wise loss of Dirichlet energy.
Our empirical results show that graph embeddings learned using EXPASS improve the predictive performance and alleviate the oversmoothing problems of GNNs.
- Score: 4.014524824655107
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increasing use of Graph Neural Networks (GNNs) in critical
real-world applications, several post hoc explanation methods have been
proposed to understand their predictions. However, there has been no work in
generating explanations on the fly during model training and utilizing them to
improve the expressive power of the underlying GNN models. In this work, we
introduce a novel explanation-directed neural message passing framework for
GNNs, EXPASS (EXplainable message PASSing), which aggregates only embeddings
from nodes and edges identified as important by a GNN explanation method.
EXPASS can be used with any existing GNN architecture and subgraph-optimizing
explainer to learn accurate graph embeddings. We theoretically show that EXPASS
alleviates the oversmoothing problem in GNNs by slowing the layer-wise loss of
Dirichlet energy and that the embedding difference between the vanilla message
passing and EXPASS framework can be upper bounded by the difference of their
respective model weights. Our empirical results show that graph embeddings
learned using EXPASS improve the predictive performance and alleviate the
oversmoothing problems of GNNs, opening up new frontiers in graph machine
learning to develop explanation-based training frameworks.
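The core EXPASS idea — aggregating only messages from nodes and edges that an explainer marks as important — can be illustrated with a masked mean aggregation. This is a minimal sketch under stated assumptions, not the authors' implementation: the per-edge importance scores and the threshold `tau` stand in for the output of a subgraph-optimizing explainer.

```python
import numpy as np

def expass_layer(h, edges, edge_importance, W, tau=0.5):
    """One explanation-directed message-passing step (illustrative sketch).

    h               : (N, d) node embeddings
    edges           : list of directed (src, dst) pairs
    edge_importance : per-edge scores in [0, 1], assumed to come from a
                      post hoc GNN explainer (placeholder here)
    W               : (d, d) layer weight matrix
    tau             : threshold; only edges scoring >= tau pass messages
    """
    N, d = h.shape
    agg = np.zeros_like(h)
    count = np.zeros(N)
    for (src, dst), imp in zip(edges, edge_importance):
        if imp >= tau:                    # keep only "important" messages
            agg[dst] += h[src]
            count[dst] += 1
    count[count == 0] = 1                 # nodes with no kept edges get zero message
    agg = agg / count[:, None]
    return np.maximum((h + agg) @ W, 0)   # self-embedding + masked mean, ReLU

# Tiny example: node 1 receives messages from nodes 0 and 2,
# but the explainer scores edge (2, 1) as unimportant, so it is dropped.
h = np.eye(3)
edges = [(0, 1), (2, 1), (1, 0)]
importance = [0.9, 0.2, 0.8]
W = np.eye(3)
h_next = expass_layer(h, edges, importance, W)
```

By zeroing out low-importance edges, fewer neighbor embeddings are averaged into each node, which is the intuition behind the slower layer-wise decay of Dirichlet energy claimed in the abstract.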
Related papers
- Global Graph Counterfactual Explanation: A Subgraph Mapping Approach [54.42907350881448]
Graph Neural Networks (GNNs) have been widely deployed in various real-world applications.
Counterfactual explanation aims to find minimum perturbations on input graphs that change the GNN predictions.
We propose GlobalGCE, a novel global-level graph counterfactual explanation method.
arXiv Detail & Related papers (2024-10-25T21:39:05Z) - Incorporating Retrieval-based Causal Learning with Information
Bottlenecks for Interpretable Graph Neural Networks [12.892400744247565]
We develop a novel interpretable causal GNN framework that incorporates retrieval-based causal learning with Graph Information Bottleneck (GIB) theory.
We achieve 32.71% higher precision on real-world explanation scenarios with diverse explanation types.
arXiv Detail & Related papers (2024-02-07T09:57:39Z) - ACGAN-GNNExplainer: Auxiliary Conditional Generative Explainer for Graph
Neural Networks [7.077341403454516]
Graph neural networks (GNNs) have proven their efficacy in a variety of real-world applications, but their underlying mechanisms remain a mystery.
To address this challenge and enable reliable decision-making, many GNN explainers have been proposed in recent years.
We introduce the Auxiliary Classifier Generative Adversarial Network (ACGAN) into the field of GNN explanation and propose a new GNN explainer dubbed ACGAN-GNNExplainer.
arXiv Detail & Related papers (2023-09-29T01:20:28Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Gradient Gating for Deep Multi-Rate Learning on Graphs [62.25886489571097]
We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs).
Our framework is based on gating the output of GNN layers with a mechanism for multi-rate flow of message passing information across nodes of the underlying graph.
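The gating idea described here — each node updating its state at its own rate — can be sketched as a per-node convex combination of the old state and a candidate update. This is a schematic illustration only; the actual G$^2$ gating mechanism is more involved, and the gate logits are assumed to come from some learned function of the node's neighborhood.

```python
import numpy as np

def gated_update(h, h_candidate, gate_logits):
    """Per-node gated update (sketch of the multi-rate idea).

    h            : (N, d) current node states
    h_candidate  : (N, d) candidate states from a GNN layer
    gate_logits  : (N,) per-node logits; in G^2 these would be produced
                   by a learned gating function (placeholder here)
    """
    tau = 1.0 / (1.0 + np.exp(-gate_logits))          # sigmoid -> rate in (0, 1)
    return (1.0 - tau)[:, None] * h + tau[:, None] * h_candidate

# Node 0 gets a near-1 rate (fast update), node 1 a near-0 rate (slow update).
h = np.zeros((2, 3))
h_cand = np.ones((2, 3))
logits = np.array([10.0, -10.0])
h_next = gated_update(h, h_cand, logits)
```

Letting different nodes flow information at different rates is what the abstract means by "multi-rate flow of message passing information".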
arXiv Detail & Related papers (2022-10-02T13:19:48Z) - Explainability in subgraphs-enhanced Graph Neural Networks [12.526174412246107]
Subgraphs-enhanced Graph Neural Networks (SGNNs) have been introduced to enhance the expressive power of GNNs.
In this work, we adapt PGExplainer, one of the most recent explainers for GNNs, to SGNNs.
We show that our framework is successful in explaining the decision process of an SGNN on graph classification tasks.
arXiv Detail & Related papers (2022-09-16T13:39:10Z) - EEGNN: Edge Enhanced Graph Neural Networks [1.0246596695310175]
We propose a new explanation for this phenomenon of deteriorated performance: mis-simplification.
We show that such simplification can reduce the potential of message-passing layers to capture the structural information of graphs.
EEGNN uses the structural information extracted from the proposed Dirichlet mixture Poisson graph model to improve the performance of various deep message-passing GNNs.
arXiv Detail & Related papers (2022-08-12T15:24:55Z) - Task-Agnostic Graph Explanations [50.17442349253348]
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph structured data.
Existing learning-based GNN explanation approaches are task-specific in training.
We propose a Task-Agnostic GNN Explainer (TAGE) trained under self-supervision with no knowledge of downstream tasks.
arXiv Detail & Related papers (2022-02-16T21:11:47Z) - Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.