Graph-based Integrated Gradients for Explaining Graph Neural Networks
- URL: http://arxiv.org/abs/2509.07648v1
- Date: Tue, 09 Sep 2025 12:15:25 GMT
- Title: Graph-based Integrated Gradients for Explaining Graph Neural Networks
- Authors: Lachlan Simpson, Kyle Millar, Adriel Cheng, Cheng-Chew Lim, Hong Gunn Chew
- Abstract summary: Integrated Gradients (IG) is a technique to address the black-box problem of neural networks. We introduce graph-based integrated gradients (GB-IG), an extension of IG to graphs. We demonstrate on four synthetic datasets that GB-IG accurately identifies crucial structural components of the graph used in classification tasks.
- Score: 2.814217189803191
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Integrated Gradients (IG) is a common explainability technique to address the black-box problem of neural networks. IG assumes continuous data; graphs, however, are discrete structures, making IG ill-suited to them. In this work, we introduce graph-based integrated gradients (GB-IG), an extension of IG to graphs. We demonstrate on four synthetic datasets that GB-IG accurately identifies crucial structural components of the graph used in classification tasks. We further demonstrate on three prevalent real-world graph datasets that GB-IG outperforms IG in highlighting important features for node classification tasks.
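The abstract builds on standard Integrated Gradients, which attributes a model's prediction to input features by integrating gradients along a path from a baseline to the input. The paper's graph-specific variant (GB-IG) is not reproduced here, but a minimal sketch of the underlying IG formula it extends may help; the function names and the toy objective below are illustrative assumptions, not from the paper:

```python
import numpy as np

# Minimal sketch of standard Integrated Gradients for a differentiable
# scalar function f: IG_i(x) = (x_i - x'_i) * integral_0^1 of
# df(x' + a*(x - x'))/dx_i da, approximated by a Riemann midpoint sum.
# GB-IG (the paper's method) adapts this idea to discrete graph structure;
# this sketch only shows the continuous formula IG assumes.

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Approximate IG attributions per input feature.

    grad_f: callable returning the gradient of f at a point (e.g. from
            autodiff); x: input vector; baseline: reference input x'.
    """
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    # Midpoint interpolation coefficients along the straight-line path.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_f(baseline + a * (x - baseline)) for a in alphas])
    avg_grad = grads.mean(axis=0)  # average gradient along the path
    return (x - baseline) * avg_grad

# Toy example (hypothetical): f(x) = x0^2 + 3*x1, so grad f = [2*x0, 3].
grad_f = lambda z: np.array([2.0 * z[0], 3.0])
attr = integrated_gradients(grad_f, x=[2.0, 1.0], baseline=[0.0, 0.0])
# Completeness axiom: attributions sum to f(x) - f(baseline) = 7.
```

The completeness property (attributions summing to the prediction difference) is what makes IG attractive; the paper's point is that the straight-line path between a baseline graph and an input graph is ill-defined for discrete structures, which GB-IG addresses.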
Related papers
- GED-Consistent Disentanglement of Aligned and Unaligned Substructures for Graph Similarity Learning [8.811956084670328]
We propose GCGSim, a GED-consistent graph similarity learning framework centering on graph-level matching and substructure-level edit costs. Our experiments on four benchmark datasets show that GCGSim achieves state-of-the-art performance.
arXiv Detail & Related papers (2025-11-25T02:07:30Z)
- Improving Graph Embeddings in Machine Learning Using Knowledge Completion with Validation in a Case Study on COVID-19 Spread [1.0308647202215706]
Graph embeddings (GEs) map features from Knowledge Graphs (KGs) into vector spaces, enabling tasks like node classification and link prediction. We propose a GML pipeline that integrates a Knowledge Completion phase to uncover latent dataset semantics before embedding generation. Experiments show that our GML pipeline significantly alters the embedding-space geometry, demonstrating that its introduction is not just a simple enrichment but a transformative step that redefines graph representation quality.
arXiv Detail & Related papers (2025-11-15T07:24:00Z)
- Diffusion on Graph: Augmentation of Graph Structure for Node Classification [7.9233221247736205]
We propose Diffusion on Graph (DoG), which generates synthetic graph structures to boost the performance of graph neural networks (GNNs). The synthetic graph structures generated by DoG are combined with the original graph to form an augmented graph for training node-level learning tasks. To mitigate the adverse effect of the noise introduced by the synthetic graph structures, a low-rank regularization method is proposed.
arXiv Detail & Related papers (2025-03-16T16:39:25Z)
- Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive Experiments, Analysis, and Improvements [54.006506479865344]
We propose a unified evaluation framework for graph-level Graph Neural Networks (GNNs). This framework provides a standardized setting to evaluate GNNs across diverse datasets. We also propose a novel GNN model with enhanced expressivity and generalization capabilities.
arXiv Detail & Related papers (2025-01-01T08:48:53Z)
- Gradient Inversion Attack on Graph Neural Networks [11.075042582118963]
Malicious attackers can steal private image data from the exchange of neural networks during federated learning. This paper studies whether private data can be reconstructed from leaked gradients in both node classification and graph classification tasks. Two widely used GNN frameworks are analyzed, namely GCN and GraphSAGE.
arXiv Detail & Related papers (2024-11-29T02:42:17Z)
- The GECo algorithm for Graph Neural Networks Explanation [0.0]
We introduce a new methodology involving graph communities to address the interpretability of graph classification problems.
The proposed method, called GECo, exploits the idea that if a community is a densely connected subset of graph nodes, this property should play a role in graph classification.
The obtained results outperform the other methods for artificial graph datasets and most real-world datasets.
arXiv Detail & Related papers (2024-11-18T09:08:30Z)
- Greener GRASS: Enhancing GNNs with Encoding, Rewiring, and Attention [12.409982249220812]
We introduce Graph Attention with Structures (GRASS), a novel GNN architecture, to enhance graph relative attention. GRASS rewires the input graph by superimposing a random regular graph to achieve long-range information propagation. It also employs a novel additive attention mechanism tailored for graph-structured data.
arXiv Detail & Related papers (2024-07-08T06:21:56Z)
- Graph Distillation with Eigenbasis Matching [43.59076214528843]
We propose Graph Distillation with Eigenbasis Matching (GDEM) to replace the real large graph.
GDEM aligns the eigenbasis and node features of real and synthetic graphs.
It directly replicates the spectrum of the real graph and thus prevents the influence of GNNs.
arXiv Detail & Related papers (2023-10-13T15:48:12Z)
- Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data [91.27527985415007]
Existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph.
We advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set.
arXiv Detail & Related papers (2023-06-05T07:53:52Z)
- Graph Transformer GANs for Graph-Constrained House Generation [223.739067413952]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The GTGAN learns effective graph node relations in an end-to-end fashion for the challenging graph-constrained house generation task.
arXiv Detail & Related papers (2023-03-14T20:35:45Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.