GIF: A General Graph Unlearning Strategy via Influence Function
- URL: http://arxiv.org/abs/2304.02835v1
- Date: Thu, 6 Apr 2023 03:02:54 GMT
- Title: GIF: A General Graph Unlearning Strategy via Influence Function
- Authors: Jiancan Wu, Yi Yang, Yuchun Qian, Yongduo Sui, Xiang Wang, Xiangnan He
- Abstract summary: Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
- Score: 63.52038638220563
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the greater emphasis on privacy and security in our society, the problem
of graph unlearning -- revoking the influence of specific data on a trained
GNN model -- is drawing increasing attention. However, existing efforts, ranging
from machine unlearning to recently emerged graph unlearning methods, either
resort to the retraining paradigm or perform approximate erasure that fails to
consider the inter-dependency between connected neighbors or imposes
constraints on the GNN structure, making it hard to achieve satisfying
performance-complexity trade-offs.
In this work, we explore the influence function tailored for graph
unlearning, so as to improve the unlearning efficacy and efficiency for graph
unlearning. We first present a unified problem formulation of diverse graph
unlearning tasks \wrt node, edge, and feature. Then, we identify the crux of
the inability of the traditional influence function to handle graph unlearning,
and devise Graph Influence Function (GIF), a model-agnostic unlearning method
that can efficiently and accurately estimate parameter changes in response to
an $\epsilon$-mass perturbation in the deleted data. The idea is to supplement
the objective of the traditional influence function with an additional loss
term for the neighbors influenced through structural dependency. Further
derivation of the closed-form solution for the parameter changes provides a
better understanding of the unlearning mechanism. We conduct extensive
experiments on four
representative GNN models and three benchmark datasets to justify the
superiority of GIF for diverse graph unlearning tasks in terms of unlearning
efficacy, model utility, and unlearning efficiency. Our implementations are
available at \url{https://github.com/wujcan/GIF-torch/}.
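The abstract's core idea can be sketched as a one-step, Newton-style parameter correction: the traditional influence-function update uses the gradient of the deleted data alone, while GIF additionally folds in a loss term for neighbors affected through the graph structure. Below is a minimal, hypothetical Python sketch (not the authors' implementation); it assumes a diagonal Hessian approximation so the inverse is elementwise, whereas practical systems estimate the Hessian-inverse-vector product more carefully.

```python
def influence_unlearn(theta, hessian_diag, grad_deleted, grad_neighbors):
    """One-step influence-function estimate of post-unlearning parameters.

    theta          : current model parameters, one value per dimension
    hessian_diag   : diagonal approximation of the training-loss Hessian
    grad_deleted   : loss gradient summed over the deleted data
    grad_neighbors : loss gradient over structurally influenced neighbors
                     (the extra term that distinguishes GIF from the
                     traditional influence function, which would pass zeros)
    """
    # theta_new ~= theta + H^{-1} (grad_deleted + grad_neighbors),
    # with H^{-1} applied elementwise under the diagonal assumption.
    return [
        t + (g_d + g_n) / h
        for t, h, g_d, g_n in zip(theta, hessian_diag, grad_deleted, grad_neighbors)
    ]
```

In practice one would avoid forming (or even diagonalizing) the Hessian explicitly and instead use Hessian-vector products with an iterative solver; the sketch only illustrates how the neighbor-loss gradient enters the same update as the deleted-data gradient.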
Related papers
- Erase then Rectify: A Training-Free Parameter Editing Approach for Cost-Effective Graph Unlearning [17.85404473268992]
Graph unlearning aims to eliminate the influence of nodes, edges, or attributes from a trained Graph Neural Network (GNN).
Existing graph unlearning techniques often necessitate additional training on the remaining data, leading to significant computational costs.
We propose a two-stage training-free approach, Erase then Rectify (ETR), designed for efficient and scalable graph unlearning.
arXiv Detail & Related papers (2024-09-25T07:20:59Z)
- Community-Centric Graph Unlearning [10.906555492206959]
We propose a novel Graph Structure Mapping Unlearning paradigm (GSMU) and a novel method based on it named Community-centric Graph Eraser (CGE).
CGE maps community subgraphs to nodes, thereby enabling the reconstruction of a node-level unlearning operation within a reduced mapped graph.
arXiv Detail & Related papers (2024-08-19T05:37:35Z)
- Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks [66.70786325911124]
Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data.
With the increasing prevalence of DGNNs, it becomes imperative to investigate the implementation of dynamic graph unlearning.
We propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning.
arXiv Detail & Related papers (2024-05-23T10:26:18Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$2$PT, a dual-channel GNN framework that performs long-range information propagation on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- Characterizing the Influence of Graph Elements [24.241010101383505]
The influence function of graph convolution networks (GCNs) can shed light on the effects of removing training nodes/edges from an input graph.
We show that the influence function of an SGC model could be used to estimate the impact of removing training nodes/edges on the test performance of the SGC without re-training the model.
arXiv Detail & Related papers (2022-10-14T01:04:28Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- Neural Stochastic Block Model & Scalable Community-Based Graph Learning [8.00785050036369]
This paper proposes a scalable community-based neural framework for graph learning.
The framework learns the graph topology through the task of community detection and link prediction.
We look into two particular applications: graph alignment and anomalous correlation detection.
arXiv Detail & Related papers (2020-05-16T03:28:50Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs, that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.