Forget and Explain: Transparent Verification of GNN Unlearning
- URL: http://arxiv.org/abs/2512.07450v1
- Date: Mon, 08 Dec 2025 11:25:19 GMT
- Title: Forget and Explain: Transparent Verification of GNN Unlearning
- Authors: Imran Ahsan, Hyunwook Yu, Jinsung Kim, Mucheol Kim
- Abstract summary: We propose an explainability-driven verifier for GNN unlearning that snapshots the model before and after deletion. We evaluate two backbones (GCN, GAT) and four unlearning strategies (Retrain, GraphEditor, GNNDelete, IDEA) across five benchmarks. Results show that Retrain and GNNDelete achieve near-complete forgetting, GraphEditor provides partial erasure, and IDEA leaves residual signals.
- Score: 3.0152570264250347
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) are increasingly used to model complex patterns in graph-structured data. However, enabling them to "forget" designated information remains challenging, especially under privacy regulations such as the GDPR. Existing unlearning methods largely optimize for efficiency and scalability, yet they offer little transparency, and the black-box nature of GNNs makes it difficult to verify whether forgetting has truly occurred. We propose an explainability-driven verifier for GNN unlearning that snapshots the model before and after deletion, using attribution shifts and localized structural changes (for example, graph edit distance) as transparent evidence. The verifier uses five explainability metrics: residual attribution, heatmap shift, explainability score deviation, graph edit distance, and a diagnostic graph rule shift. We evaluate two backbones (GCN, GAT) and four unlearning strategies (Retrain, GraphEditor, GNNDelete, IDEA) across five benchmarks (Cora, Citeseer, Pubmed, Coauthor-CS, Coauthor-Physics). Results show that Retrain and GNNDelete achieve near-complete forgetting, GraphEditor provides partial erasure, and IDEA leaves residual signals. These explanation deltas provide the primary, human-readable evidence of forgetting; we also report membership-inference ROC-AUC as a complementary, graph-wide privacy signal.
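As a concrete illustration of how such explanation deltas could be computed, here is a minimal sketch assuming a gradient-times-input attribution heuristic, PyTorch-style model snapshots taken before and after deletion, and NetworkX's exact graph edit distance. All names are illustrative, not the authors' implementation.

```python
# Minimal sketch of explanation-delta verification (illustrative names,
# not the authors' code). `model` is assumed to be a GNN snapshot mapping
# a node-feature matrix X and an edge index to per-node logits.
import torch
import networkx as nx

def grad_x_input_attribution(model, X, edge_index, target_nodes):
    """Per-node attribution via a simple gradient x input heuristic."""
    X = X.clone().detach().requires_grad_(True)
    logits = model(X, edge_index)
    logits[target_nodes].max(dim=1).values.sum().backward()
    return (X.grad * X).abs().sum(dim=1)        # one importance score per node

def residual_attribution(attr_post, deleted_nodes):
    """Fraction of post-deletion attribution mass still on deleted nodes."""
    return (attr_post[deleted_nodes].sum() / attr_post.sum().clamp_min(1e-12)).item()

def heatmap_shift(attr_pre, attr_post):
    """L1 distance between normalized attribution heatmaps (0 = unchanged)."""
    p = attr_pre / attr_pre.sum().clamp_min(1e-12)
    q = attr_post / attr_post.sum().clamp_min(1e-12)
    return (p - q).abs().sum().item()

def explanation_ged(sub_pre: nx.Graph, sub_post: nx.Graph):
    """Graph edit distance between the small explanation subgraphs an
    explainer returns; exact GED is exponential, so keep subgraphs compact."""
    return nx.graph_edit_distance(sub_pre, sub_post)
```

On this reading, near-complete forgetting (Retrain, GNNDelete) should drive residual attribution on the deleted nodes toward zero, while a residual signal such as IDEA's would keep it measurably positive.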
Related papers
- How does Graph Structure Modulate Membership-Inference Risk for Graph Neural Networks? [0.34546020643989767]
Graph neural networks (GNNs) have become the standard tool for encoding data and their complex relationships into continuous representations. Their use in sensitive applications has raised concerns about the potential leakage of training data. Research on privacy leakage in GNNs has largely been shaped by findings from non-graph domains.
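A common way to quantify such leakage, and the graph-wide privacy signal the main paper reports, is the ROC-AUC of a simple loss-threshold membership-inference attack. The following is a hedged sketch with illustrative names, not this paper's attack.

```python
# Hedged sketch of a loss-threshold membership-inference signal. Lower
# per-node loss is taken as evidence of training membership.
import numpy as np
from sklearn.metrics import roc_auc_score

def mia_auc(member_losses, nonmember_losses):
    """AUC near 0.5 means the attacker cannot tell members from non-members."""
    scores = -np.concatenate([member_losses, nonmember_losses])  # low loss -> high score
    labels = np.concatenate([np.ones(len(member_losses)),
                             np.zeros(len(nonmember_losses))])
    return roc_auc_score(labels, scores)
```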
arXiv Detail & Related papers (2026-01-23T19:08:36Z) - Enhancing Node-Level Graph Domain Adaptation by Alleviating Local Dependency [8.229138664380324]
Transferring knowledge effectively from one graph to another remains a critical challenge. In this paper, we show that conditional shift can be observed only if there exist local dependencies among node features. We propose to improve GDA by decorrelating node features, which can be implemented through decorrelated GCN layers and graph transformer layers.
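As a rough, assumed illustration of the decorrelation idea (not the paper's decorrelated GCN layer), one could whiten node features so that cross-dimension correlations are removed before message passing; the paper's layers are a learned, in-network variant of this one-shot transform.

```python
# ZCA-style whitening of the node-feature dimensions (illustrative only).
import torch

def decorrelate_features(X, eps=1e-5):
    """Whiten a node-feature matrix X of shape [num_nodes, dim]."""
    Xc = X - X.mean(dim=0, keepdim=True)
    cov = (Xc.T @ Xc) / max(Xc.shape[0] - 1, 1)
    eigvals, eigvecs = torch.linalg.eigh(cov)
    inv_sqrt = eigvecs @ torch.diag((eigvals + eps).rsqrt()) @ eigvecs.T
    return Xc @ inv_sqrt                 # identity covariance (up to eps)
```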
arXiv Detail & Related papers (2025-12-15T10:00:25Z) - GraphTOP: Graph Topology-Oriented Prompting for Graph Neural Networks [66.07512871031163]
"Pre-training, adaptation" scheme pre-trains powerful Graph Neural Networks (GNNs) over unlabeled graph data.<n>In the adaptation phase, graph prompting modifies input graph data with learnable prompts while keeping pre-trained GNN models frozen.<n>We propose the first **Graph** **T**opology-**O**riented **P**rompting (GraphTOP) framework to effectively adapt pre-trained GNN models for downstream tasks.
arXiv Detail & Related papers (2025-10-25T22:50:12Z) - GraphBridge: Towards Arbitrary Transfer Learning in GNNs [65.01790632978962]
GraphBridge is a novel framework to enable knowledge transfer across disparate tasks and domains in GNNs. It allows for the augmentation of any pre-trained GNN with prediction heads and a bridging network that connects the input to the output layer. Empirical validation, conducted over 16 datasets representative of these scenarios, confirms the framework's capacity for task- and domain-agnostic transfer learning.
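A hedged sketch of that augmentation pattern, with illustrative module names and an assumed backbone output dimension:

```python
# Freeze the pre-trained GNN, add a new head, and bridge the raw input
# to the output layer (a sketch, not GraphBridge's actual architecture).
import torch.nn as nn

class BridgedGNN(nn.Module):
    def __init__(self, backbone, in_dim, hid_dim, out_dim):
        super().__init__()
        self.backbone = backbone                  # frozen pre-trained GNN
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        self.bridge = nn.Linear(in_dim, hid_dim)  # input -> output-layer bridge
        self.head = nn.Linear(hid_dim, out_dim)   # new prediction head

    def forward(self, X, edge_index):
        h = self.backbone(X, edge_index)          # assumed shape [N, hid_dim]
        return self.head(h + self.bridge(X))
```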
arXiv Detail & Related papers (2025-02-26T15:57:51Z) - Community-Centric Graph Unlearning [10.906555492206959]
We propose a novel Graph Structure Mapping Unlearning paradigm (GSMU) and, based on it, a method named Community-centric Graph Eraser (CGE). CGE maps community subgraphs to nodes, thereby enabling the reconstruction of a node-level unlearning operation within a reduced mapped graph.
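The subgraph-to-node mapping can be sketched with off-the-shelf community detection; this uses NetworkX's greedy modularity communities as a stand-in for whatever partition CGE actually employs.

```python
# Illustrative sketch: collapse each community into one node of a reduced
# mapped graph on which node-level operations can then run.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def community_coarsen(G: nx.Graph):
    communities = list(greedy_modularity_communities(G))
    mapping = {v: cid for cid, nodes in enumerate(communities) for v in nodes}
    coarse = nx.Graph()
    coarse.add_nodes_from(range(len(communities)))
    for u, v in G.edges():
        cu, cv = mapping[u], mapping[v]
        if cu != cv:
            coarse.add_edge(cu, cv)
    return coarse, mapping   # unlearning then operates on `coarse`
```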
arXiv Detail & Related papers (2024-08-19T05:37:35Z) - Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks [32.345435955298825]
Graph Neural Networks (GNNs) are neural models that leverage the dependency structure in graphical data via message passing among the graph nodes.
A main challenge in studying GNN explainability is to provide fidelity measures that evaluate the performance of these explanation functions.
This paper studies this foundational challenge, spotlighting the inherent limitations of prevailing fidelity metrics.
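For context, the prevailing fidelity+ style metric scrutinized here measures how much a prediction degrades when the explanation is removed. A minimal PyTorch-style sketch, with illustrative names:

```python
# Sketch of fidelity+: the drop in predicted class probability once the
# explainer's edges are ablated. `expl_edge_mask` marks explanation edges.
import torch

def fidelity_plus(model, X, edge_index, expl_edge_mask, node_idx):
    with torch.no_grad():
        p_full = model(X, edge_index).softmax(dim=1)
        y = p_full[node_idx].argmax()
        kept = edge_index[:, ~expl_edge_mask]   # remove the explanation
        p_abl = model(X, kept).softmax(dim=1)
    return (p_full[node_idx, y] - p_abl[node_idx, y]).item()
```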
arXiv Detail & Related papers (2023-10-03T06:25:14Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z) - Reliable Representations Make A Stronger Defender: Unsupervised
Structure Refinement for Robust GNN [36.045702771828736]
Graph Neural Networks (GNNs) have been successful on a wide range of tasks over graph data.
Recent studies have shown that attackers can catastrophically degrade the performance of GNNs by maliciously modifying the graph structure.
We propose an unsupervised pipeline, named STABLE, to optimize the graph structure.
arXiv Detail & Related papers (2022-06-30T10:02:32Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
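The unrolling idea can be sketched as a fixed number of proximal gradient steps with learnable step sizes and thresholds. The quadratic mixture T ≈ A + 0.5·A² below is an assumed stand-in for GDN's actual convolutional model, and the gradient assumes a symmetric adjacency.

```python
# Hedged sketch of unrolled, truncated proximal gradient deconvolution.
import torch
import torch.nn as nn

class UnrolledDeconv(nn.Module):
    def __init__(self, num_layers=5):
        super().__init__()
        self.steps = nn.Parameter(torch.full((num_layers,), 0.1))    # step sizes
        self.thresh = nn.Parameter(torch.full((num_layers,), 0.01))  # prox thresholds

    def forward(self, T):
        """Recover a sparse latent adjacency A with T ~= A + 0.5 * A @ A."""
        A = torch.zeros_like(T)
        for tau, lam in zip(self.steps, self.thresh):
            R = A + 0.5 * (A @ A) - T                  # data-fit residual
            grad = R + 0.5 * (A @ R + R @ A)           # gradient for symmetric A
            A = A - tau * grad
            A = torch.relu(A.abs() - lam) * A.sign()   # sparsifying soft-threshold
        return A
```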
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
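The contrastive objective underlying this recipe is typically an NT-Xent loss over paired augmented views; a minimal sketch, with the encoder and augmentations (node drops, edge perturbations, etc.) left abstract:

```python
# NT-Xent over two augmented views of the same batch of graphs.
# z1, z2 are assumed [batch, dim] graph embeddings from a shared encoder.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.T / temperature
    sim.fill_diagonal_(float('-inf'))      # a view is not its own positive
    n = z1.shape[0]
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)   # pull matched views together
```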
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.