Towards Effective and General Graph Unlearning via Mutual Evolution
- URL: http://arxiv.org/abs/2401.11760v1
- Date: Mon, 22 Jan 2024 08:45:29 GMT
- Title: Towards Effective and General Graph Unlearning via Mutual Evolution
- Authors: Xunkai Li, Yulin Zhao, Zhengyu Wu, Wentao Zhang, Rong-Hua Li, Guoren
Wang
- Abstract summary: We propose MEGU, a new mutual evolution paradigm that simultaneously evolves the predictive and unlearning capacities of graph unlearning.
In experiments on 9 graph benchmark datasets, MEGU achieves average performance improvements of 2.7%, 2.5%, and 3.2% over state-of-the-art baselines on feature-, node-, and edge-level unlearning tasks, respectively.
MEGU also exhibits satisfactory training efficiency, reducing time and space overhead by an average of 159.8x and 9.6x, respectively, compared to retraining a GNN from scratch.
- Score: 44.11777886421429
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid advancement of AI applications, the growing need
for data privacy and model robustness has highlighted the importance of
machine unlearning, especially in thriving graph-based scenarios. However,
most existing graph unlearning strategies rely primarily on well-designed
architectures or manual processes, rendering them less user-friendly and
posing challenges in terms of deployment efficiency. Furthermore, striking a balance
between unlearning performance and framework generalization is also a pivotal
concern. To address the above issues, we propose Mutual Evolution Graph
Unlearning (MEGU), a new mutual evolution paradigm that simultaneously
evolves the predictive and unlearning capacities of graph unlearning. By
incorporating these two components, MEGU ensures complementary optimization
in a unified training framework that aligns with the
prediction and unlearning requirements. Extensive experiments on 9 graph
benchmark datasets demonstrate the superior performance of MEGU in addressing
unlearning requirements at the feature, node, and edge levels. Specifically,
MEGU achieves average performance improvements of 2.7%, 2.5%, and 3.2%
across these three levels of unlearning tasks when compared to state-of-the-art
baselines. Furthermore, MEGU exhibits satisfactory training efficiency,
reducing time and space overhead by an average of 159.8x and 9.6x,
respectively, in comparison to retraining a GNN from scratch.
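To make the mutual-evolution idea concrete, below is a minimal sketch of a training step that optimizes a prediction objective on retained nodes and a forgetting objective on deleted nodes in one unified loop. The uniform-target forgetting loss and the trade-off weight `lam` are illustrative assumptions for exposition, not MEGU's actual components.

```python
import torch
import torch.nn.functional as F

def joint_unlearning_step(model, optimizer, x, edge_index, y,
                          retain_idx, forget_idx, lam=1.0):
    """One step that evolves predictive and unlearning capacities jointly.

    NOTE: the uniform-target forgetting loss and `lam` are illustrative
    choices for this sketch, not MEGU's actual design.
    """
    model.train()
    optimizer.zero_grad()
    logits = model(x, edge_index)  # any GNN node classifier works here

    # Predictive capacity: stay accurate on the retained nodes.
    pred_loss = F.cross_entropy(logits[retain_idx], y[retain_idx])

    # Unlearning capacity: push forget-node predictions toward an
    # uninformative uniform distribution.
    log_probs = F.log_softmax(logits[forget_idx], dim=-1)
    uniform = torch.full_like(log_probs, 1.0 / log_probs.size(-1))
    unlearn_loss = F.kl_div(log_probs, uniform, reduction="batchmean")

    # Joint optimization lets each objective constrain the other.
    loss = pred_loss + lam * unlearn_loss
    loss.backward()
    optimizer.step()
    return pred_loss.item(), unlearn_loss.item()
```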
Related papers
- Erase then Rectify: A Training-Free Parameter Editing Approach for Cost-Effective Graph Unlearning [17.85404473268992]
Graph unlearning aims to eliminate the influence of nodes, edges, or attributes from a trained Graph Neural Network (GNN).
Existing graph unlearning techniques often necessitate additional training on the remaining data, leading to significant computational costs.
We propose a two-stage training-free approach, Erase then Rectify (ETR), designed for efficient and scalable graph unlearning.
arXiv Detail & Related papers (2024-09-25T07:20:59Z)
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
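As a rough illustration of the context-sampling step, the sketch below gathers random-walk neighborhoods that could serve as token sequences for a Transformer; the function and its parameters are hypothetical, and GSPT's actual sampling and tokenization may differ.

```python
import random

def sample_node_contexts(adj, start, walk_length=8, num_walks=4, seed=0):
    """Sample short random walks from `start`; each walk is a candidate
    token sequence for Transformer pretraining.

    adj: dict mapping node id -> list of neighbor ids.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        walk, node = [start], start
        for _ in range(walk_length - 1):
            neighbors = adj.get(node, [])
            if not neighbors:  # dead end: stop this walk early
                break
            node = rng.choice(neighbors)
            walk.append(node)
        walks.append(walk)
    return walks

# Example on a toy graph: contexts for node 0.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(sample_node_contexts(adj, start=0))
```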
arXiv Detail & Related papers (2024-06-19T22:30:08Z) - Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIA).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z)
- Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks [66.70786325911124]
Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data.
With the increasing prevalence of dynamic GNNs (DGNNs), it becomes imperative to investigate the implementation of dynamic graph unlearning.
We propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning.
arXiv Detail & Related papers (2024-05-23T10:26:18Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
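A minimal sketch of the embedding-extraction step, assuming the LM has already been fine-tuned on the downstream task; the checkpoint name and mean pooling here are placeholder choices, not SimTeG's exact recipe.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint; SimTeG would load the PEFT-fine-tuned LM instead.
name = "sentence-transformers/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()

def node_embeddings(texts):
    """Mean-pool last hidden states into one embedding per node text."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state         # [N, T, d]
    mask = batch["attention_mask"].unsqueeze(-1).float()  # [N, T, 1]
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # [N, d]

# These vectors then serve as input node features for a downstream GNN.
emb = node_embeddings(["title and abstract of node 1 ...",
                       "title and abstract of node 2 ..."])
print(emb.shape)
```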
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
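For orientation, the classical influence-function estimate of the parameter change after removing a set $S$ of training points takes the following textbook form (GIF's graph-oriented formulation adds correction terms for structurally affected neighbors, which are omitted here):

```latex
\hat{\theta}_{-S} \;\approx\; \hat{\theta}
  + \frac{1}{n} H_{\hat{\theta}}^{-1} \sum_{z \in S} \nabla_{\theta}\, \ell(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} \;=\; \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2}\, \ell(z_i, \hat{\theta}).
```

In practice the inverse Hessian-vector product is approximated (e.g., with conjugate gradients or stochastic estimation) rather than formed explicitly.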
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Unlearning Graph Classifiers with Limited Data Resources [39.29148804411811]
Controlled data removal is becoming an important feature of machine learning models for data-sensitive Web applications.
It is still largely unknown how to perform efficient machine unlearning of graph neural networks (GNNs).
Our main contribution is the first known nonlinear approximate graph unlearning method, based on graph scattering transforms (GSTs).
Our second contribution is a theoretical analysis of the computational complexity of the proposed unlearning mechanism.
Our third contribution is extensive simulation results showing that, compared to complete retraining of GNNs after each removal request, the new GST-based approach offers, on average, a 10.38x speed-up.
arXiv Detail & Related papers (2022-11-06T20:46:50Z)
- Dynamic Graph Representation Learning via Graph Transformer Networks [41.570839291138114]
We propose a Transformer-based dynamic graph learning method named Dynamic Graph Transformer (DGT).
DGT uses spatial-temporal encoding to effectively learn graph topology and capture implicit links.
We show that DGT presents superior performance compared with several state-of-the-art baselines.
arXiv Detail & Related papers (2021-11-19T21:44:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.