On the Feasibility of Fidelity$^-$ for Graph Pruning
- URL: http://arxiv.org/abs/2406.11504v1
- Date: Mon, 17 Jun 2024 13:05:00 GMT
- Title: On the Feasibility of Fidelity$^-$ for Graph Pruning
- Authors: Yong-Min Shin, Won-Yong Shin
- Abstract summary: Fidelity measures the output difference after removing the unimportant parts of the input graph.
This raises a natural question: "Does fidelity induce a global (soft) mask for graph pruning?"
We propose Fidelity$^-$-inspired Pruning (FiP), an effective framework to construct global edge masks from local explanations.
- Score: 8.237329883558857
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As one of the most popular quantitative metrics for assessing the quality of explanations of graph neural networks (GNNs), fidelity measures the output difference after removing the unimportant parts of the input graph. Fidelity has been widely used owing to its straightforward interpretation: the underlying model should produce similar predictions when the features deemed unimportant by the explanation are removed. This raises a natural question: "Does fidelity induce a global (soft) mask for graph pruning?" To answer it, we explore the potential of the fidelity measure for graph pruning, eventually enhancing GNN models for better efficiency. To this end, we propose Fidelity$^-$-inspired Pruning (FiP), an effective framework to construct global edge masks from local explanations. Our empirical observations using 7 edge attribution methods demonstrate that, surprisingly, general eXplainable AI (XAI) methods outperform methods tailored to GNNs in terms of graph pruning performance.
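For intuition, Fidelity$^-$ is commonly defined as $\mathrm{Fid}^- = \frac{1}{N}\sum_{i=1}^{N}\big(f(G_i)_{y_i} - f(G_i^{s})_{y_i}\big)$, where $G_i^{s}$ denotes the input graph with the edges deemed unimportant removed; a value near zero means the retained edges preserve the prediction. Below is a minimal Python sketch of the two ingredients the abstract describes, computing Fidelity$^-$ from model outputs and aggregating local edge attributions into a global soft mask for pruning; the function names, the mean aggregation, and the top-$k$ thresholding are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def fidelity_minus(full_probs, pruned_probs, labels):
    """Fidelity^-: mean drop in the predicted probability of the true class
    after the unimportant edges are removed. Lower is better, since a good
    explanation keeps everything the model relies on."""
    idx = np.arange(len(labels))
    return float(np.mean(full_probs[idx, labels] - pruned_probs[idx, labels]))

def global_edge_mask(local_attributions):
    """Average per-instance (local) edge attributions into one global soft
    mask, rescaled to [0, 1]. The mean aggregation is an assumption here,
    not necessarily the rule used by FiP."""
    stacked = np.stack(local_attributions)        # (num_explanations, num_edges)
    mask = stacked.mean(axis=0)
    return (mask - mask.min()) / (mask.max() - mask.min() + 1e-12)

def prune_edges(edge_index, mask, keep_ratio=0.8):
    """Keep the top `keep_ratio` fraction of edges by global mask score."""
    k = max(1, int(keep_ratio * mask.shape[0]))
    keep = np.argsort(mask)[-k:]                  # highest-scoring edges
    return edge_index[:, keep]

# Toy usage: three local explanations over a 6-edge, 4-node graph.
rng = np.random.default_rng(0)
edge_index = rng.integers(0, 4, size=(2, 6))      # COO edge list
mask = global_edge_mask([rng.random(6) for _ in range(3)])
print(prune_edges(edge_index, mask, keep_ratio=0.5).shape)  # (2, 3)

# Fidelity^- on two toy instances (true classes 0 and 1): ~0.075.
full = np.array([[0.7, 0.3], [0.2, 0.8]])
pruned = np.array([[0.6, 0.4], [0.25, 0.75]])
print(fidelity_minus(full, pruned, np.array([0, 1])))
```

In a real pipeline, the local attributions would come from an edge attribution method (e.g., one of the 7 compared in the paper), and the pruned edge list would be fed back to the GNN to measure efficiency gains.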
Related papers
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model effectively enhances generalization under various types of distribution shifts and yields up to a 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
arXiv Detail & Related papers (2024-02-18T07:49:22Z) - Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first applied to utilize neighbors' information, and a bias-mitigation step then explicitly pushes the representation centers of demographic groups together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
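As a rough illustration of the bias-mitigation step summarized above, the sketch below computes per-group centers of node representations and penalizes their pairwise distances; the specific penalty and its use alongside the task loss are assumptions for illustration, not the authors' FMP objective.

```python
import torch

def group_center_penalty(h, groups):
    """Bias-mitigation sketch: penalize the pairwise distance between the
    mean node representations (centers) of demographic groups, pushing
    them together as described above. Illustrative, not the FMP update."""
    centers = torch.stack([h[groups == g].mean(dim=0) for g in groups.unique()])
    return torch.cdist(centers, centers).mean()

# Toy usage: 5 nodes with 4-dim representations in two demographic groups.
h = torch.randn(5, 4, requires_grad=True)   # e.g., post-aggregation features
groups = torch.tensor([0, 0, 1, 1, 1])
loss = group_center_penalty(h, groups)      # add to the task loss during training
loss.backward()
```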
arXiv Detail & Related papers (2023-12-19T18:00:15Z) - Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks [32.345435955298825]
Graph Neural Networks (GNNs) are neural models that leverage the dependency structure in graphical data via message passing among the graph nodes.
A main challenge in studying GNN explainability is to provide fidelity measures that evaluate the performance of explanation functions.
This paper studies this foundational challenge, spotlighting the inherent limitations of prevailing fidelity metrics.
arXiv Detail & Related papers (2023-10-03T06:25:14Z) - GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations [21.997015999698732]
Diverse explainability methods of graph neural networks (GNN) have been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions.
It is not yet clear how to evaluate the correctness of those explanations, whether from a human or a model perspective.
We propose GInX-Eval, an evaluation procedure for graph explanations that overcomes the pitfalls of faithfulness.
arXiv Detail & Related papers (2023-09-28T07:56:10Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - DAG Matters! GFlowNets Enhanced Explainer For Graph Neural Networks [30.19635147123557]
We propose a generative structure, the GFlowNets-based GNN Explainer (GFlowExplainer).
Our GFlowExplainer aims to learn a policy that generates a distribution over subgraphs in which the probability of a subgraph is proportional to its reward.
We conduct extensive experiments on both synthetic and real datasets, and both qualitative and quantitative results show the superiority of our GFlowExplainer.
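To make the reward-proportionality property concrete, the toy sketch below samples among a fixed set of candidate subgraphs with probability proportional to their rewards; enumerating candidates and normalizing rewards explicitly are illustrative simplifications, since GFlowExplainer learns this property through a generative policy.

```python
import numpy as np

def sample_proportional_to_reward(rewards, rng):
    """Pick a candidate index with probability proportional to its reward,
    the property a trained GFlowNet sampler is meant to satisfy."""
    p = np.asarray(rewards, dtype=float)
    return rng.choice(len(p), p=p / p.sum())

rng = np.random.default_rng(0)
rewards = [0.1, 0.6, 0.3]              # rewards of three candidate subgraphs
picks = [sample_proportional_to_reward(rewards, rng) for _ in range(10_000)]
print(np.bincount(picks) / 10_000)     # empirical frequencies ~ [0.1, 0.6, 0.3]
```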
arXiv Detail & Related papers (2023-03-04T16:15:25Z) - Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing a node correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of MatchExplainer, which outperforms all state-of-the-art parametric baselines by significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z) - Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps [85.49020931411825]
Compressing convolutional neural networks (CNNs) is crucial to deploying these models on edge devices with limited resources.
We propose to address the channel pruning problem from a novel perspective by leveraging the interpretations of a model to steer the pruning process.
We tackle this challenge by introducing a selector model that predicts real-time smooth saliency masks for pruned models.
arXiv Detail & Related papers (2022-09-07T01:12:11Z) - Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing the out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z) - Robust Counterfactual Explanations on Graph Neural Networks [42.91881080506145]
The massive deployment of Graph Neural Networks (GNNs) in high-stakes applications generates a strong demand for explanations that are robust to noise.
Most existing methods generate explanations by identifying a subgraph of an input graph that has a strong correlation with the prediction.
We propose a novel method to generate robust counterfactual explanations on GNNs by explicitly modelling the common decision logic of GNNs on similar input graphs.
arXiv Detail & Related papers (2021-07-08T19:50:00Z) - Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks [6.004582130591279]
We find previous explanation-generation approaches, which maximize the mutual information between the label distribution produced by the GNN model and the explanation, to be restrictive.
Specifically, existing approaches do not enforce explanations to be predictive, sparse, or robust to input perturbations.
We propose Zorro, a novel approach based on principles from rate-distortion theory that uses a simple procedure to optimize for fidelity.
arXiv Detail & Related papers (2021-05-18T15:53:09Z)