Efficient GNN Explanation via Learning Removal-based Attribution
- URL: http://arxiv.org/abs/2306.05760v1
- Date: Fri, 9 Jun 2023 08:54:20 GMT
- Title: Efficient GNN Explanation via Learning Removal-based Attribution
- Authors: Yao Rong, Guanchu Wang, Qizhang Feng, Ninghao Liu, Zirui Liu,
Enkelejda Kasneci, Xia Hu
- Abstract summary: We propose a GNN explanation framework named LeArn Removal-based Attribution (LARA) to address this problem.
The explainer in LARA learns to generate removal-based attributions, enabling it to provide high-fidelity explanations.
In particular, LARA is 3.5 times faster and achieves higher fidelity than the state-of-the-art method on the large dataset ogbn-arxiv.
- Score: 56.18049062940675
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Graph Neural Networks (GNNs) have been widely used in real-world
applications, model explanations are required not only by users but also by
legal regulations. However, simultaneously achieving high fidelity and low
computational costs in generating explanations has been a challenge for current
methods. In this work, we propose a framework of GNN explanation named LeArn
Removal-based Attribution (LARA) to address this problem. Specifically, we
introduce removal-based attribution and demonstrate, both theoretically and
experimentally, its close link to interpretation fidelity. The explainer in
LARA learns to generate removal-based attributions, which enables it to provide
high-fidelity explanations. A subgraph sampling strategy is designed in LARA to
improve the scalability of the training process. At deployment time, LARA
efficiently generates explanations through a single feed-forward pass. We
benchmark our approach against other state-of-the-art GNN explanation methods on
six datasets. Results highlight the effectiveness of our framework regarding
both efficiency and fidelity. In particular, LARA is 3.5 times faster and
achieves higher fidelity than the state-of-the-art method on the large dataset
ogbn-arxiv (more than 160K nodes and 1M edges), showing its great potential in
real-world applications. Our source code is available at
https://anonymous.4open.science/r/LARA-10D8/README.md.
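To make the two central notions in the abstract concrete, the sketch below illustrates removal-based attribution (scoring an edge by the change in the model's output when that edge is deleted) and a fidelity-style metric (the drop in the predicted-class probability after removing the top-k attributed edges). It is a minimal illustration with a toy mean-aggregation GNN; the model, function names, and the brute-force per-edge removal loop are assumptions made for exposition and do not reproduce the LARA explainer, which is trained to produce such attributions in a single feed-forward pass.

```python
# Minimal sketch of removal-based edge attribution and a fidelity-style metric.
# The toy GNN and all function names are illustrative assumptions, not the LARA code.
import torch
import torch.nn.functional as F


class TinyGCN(torch.nn.Module):
    """A toy mean-aggregation GNN standing in for any node classifier f(X, E)."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, n_classes)

    @staticmethod
    def aggregate(h, edge_index):
        # Mean over incoming neighbour messages; edge_index is a [2, E] LongTensor.
        src, dst = edge_index
        out = torch.zeros_like(h)
        out.index_add_(0, dst, h[src])
        deg = torch.zeros(h.size(0), dtype=h.dtype, device=h.device)
        deg.index_add_(0, dst, torch.ones(dst.size(0), dtype=h.dtype, device=h.device))
        return out / deg.clamp(min=1.0).unsqueeze(-1)

    def forward(self, x, edge_index):
        h = self.aggregate(self.lin1(x), edge_index).relu()
        return self.aggregate(self.lin2(h), edge_index)


def removal_attribution(model, x, edge_index, node, target):
    """Score each edge by the drop in the target logit when that edge is removed."""
    n_edges = edge_index.size(1)
    scores = torch.empty(n_edges)
    with torch.no_grad():
        base = model(x, edge_index)[node, target]
        for e in range(n_edges):
            keep = torch.arange(n_edges) != e          # drop exactly one edge
            scores[e] = base - model(x, edge_index[:, keep])[node, target]
    return scores


def fidelity_plus(model, x, edge_index, node, scores, k):
    """Change in the predicted-class probability after removing the k edges
    with the highest attribution scores (a Fidelity+-style measure)."""
    with torch.no_grad():
        probs = F.softmax(model(x, edge_index)[node], dim=-1)
        pred = probs.argmax()
        keep = torch.ones(edge_index.size(1), dtype=torch.bool)
        keep[scores.topk(k).indices] = False
        probs_removed = F.softmax(model(x, edge_index[:, keep])[node], dim=-1)
    return (probs[pred] - probs_removed[pred]).item()
```

On a small graph, ranking edges with removal_attribution and tracking fidelity_plus as k grows mirrors, at toy scale, the kind of fidelity evaluation the abstract reports; the point of LARA is to obtain comparable attributions without the expensive per-edge removal loop.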
Related papers
- On the Feasibility of Fidelity$^-$ for Graph Pruning [8.237329883558857]
Fidelity$^-$ measures the output difference after removing unimportant parts of the input graph.
This raises a natural question: "Does fidelity induce a global (soft) mask for graph pruning?"
We propose Fidelity$^-$-inspired Pruning (FiP), an effective framework to construct global edge masks from local explanations.
arXiv Detail & Related papers (2024-06-17T13:05:00Z)
- ExaRanker-Open: Synthetic Explanation for IR using Open-Source LLMs [60.81649785463651]
We introduce ExaRanker-Open, where we adapt and explore the use of open-source language models to generate explanations.
Our findings reveal that incorporating explanations consistently enhances neural rankers, with benefits escalating as the LLM size increases.
arXiv Detail & Related papers (2024-02-09T11:23:14Z)
- Towards Training GNNs using Explanation Directed Message Passing [4.014524824655107]
We introduce a novel explanation-directed neural message passing framework for GNNs, EXPASS (EXplainable message PASSing).
We show that EXPASS alleviates the oversmoothing problem in GNNs by slowing the layer-wise loss of Dirichlet energy.
Our empirical results show that graph embeddings learned using EXPASS improve the predictive performance and alleviate the oversmoothing problems of GNNs.
arXiv Detail & Related papers (2022-11-30T04:31:26Z)
- Towards Formal Approximated Minimal Explanations of Neural Networks [0.0]
Deep neural networks (DNNs) are now being used in numerous domains.
However, DNNs are "black boxes" and cannot be interpreted by humans.
We propose an efficient, verification-based method for finding minimal explanations.
arXiv Detail & Related papers (2022-10-25T11:06:37Z)
- Task-Agnostic Graph Explanations [50.17442349253348]
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph structured data.
Existing learning-based GNN explanation approaches are task-specific in training.
We propose a Task-Agnostic GNN Explainer (TAGE) trained under self-supervision with no knowledge of downstream tasks.
arXiv Detail & Related papers (2022-02-16T21:11:47Z)
- A Meta-Learning Approach for Training Explainable Graph Neural Networks [10.11960004698409]
We propose a meta-learning framework for improving the level of explainability of a GNN directly at training time.
Our framework jointly trains a model to solve the original task, e.g., node classification, and to provide easily processable outputs for downstream algorithms.
Our model-agnostic approach can improve the explanations produced for different GNN architectures and use any instance-based explainer to drive this process.
arXiv Detail & Related papers (2021-09-20T11:09:10Z)
- Jointly Learnable Data Augmentations for Self-Supervised GNNs [0.311537581064266]
We propose GraphSurgeon, a novel self-supervised learning method for graph representation learning.
We take advantage of the flexibility of the learnable data augmentation and introduce a new strategy that augments in the embedding space.
Our findings show that GraphSurgeon is comparable to six SOTA semi-supervised baselines and on par with five SOTA self-supervised baselines in node classification tasks.
arXiv Detail & Related papers (2021-08-23T21:33:12Z)
- Combining Label Propagation and Simple Models Out-performs Graph Neural Networks [52.121819834353865]
We show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs.
We call this overall procedure Correct and Smooth (C&S).
Our approach exceeds or nearly matches the performance of state-of-the-art GNNs on a wide variety of benchmarks.
arXiv Detail & Related papers (2020-10-27T02:10:52Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)