Unlink to Unlearn: Simplifying Edge Unlearning in GNNs
- URL: http://arxiv.org/abs/2402.10695v2
- Date: Mon, 11 Mar 2024 17:08:36 GMT
- Title: Unlink to Unlearn: Simplifying Edge Unlearning in GNNs
- Authors: Jiajun Tan, Fei Sun, Ruichen Qiu, Du Su, Huawei Shen
- Abstract summary: Unlearning in Graph Neural Networks (GNNs) has emerged as a prominent research frontier in academia.
Our research focuses on edge unlearning, a process of particular relevance to real-world applications.
We develop Unlink to Unlearn (UtU), a novel method that facilitates unlearning exclusively by unlinking the forget edges from the graph structure.
- Score: 24.987140675476464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As concerns over data privacy intensify, unlearning in Graph Neural Networks
(GNNs) has emerged as a prominent research frontier in academia. This concept
is pivotal in enforcing the \textit{right to be forgotten}, which entails the
selective removal of specific data from trained GNNs upon user request. Our
research focuses on edge unlearning, a process of particular relevance to
real-world applications. Current state-of-the-art approaches like GNNDelete can
eliminate the influence of specific edges yet suffer from
\textit{over-forgetting}, which means the unlearning process inadvertently
removes information beyond what is needed, leading to a significant
performance decline for remaining edges. Our analysis identifies the loss
functions of GNNDelete as the primary source of over-forgetting and also
suggests that loss functions may be redundant for effective edge unlearning.
Building on these insights, we simplify GNNDelete to develop \textbf{Unlink to
Unlearn} (UtU), a novel method that facilitates unlearning exclusively through
unlinking the forget edges from graph structure. Our extensive experiments
demonstrate that UtU delivers privacy protection on par with that of a
retrained model while preserving high accuracy in downstream tasks, by
upholding over 97.3\% of the retrained model's privacy protection capabilities
and 99.8\% of its link prediction accuracy. Meanwhile, UtU requires only
constant computational demands, underscoring its advantage as a highly
lightweight and practical edge unlearning solution.
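As the abstract describes, UtU performs unlearning purely by removing the forget edges from the graph structure, with no further optimization of the model weights. A minimal sketch of that unlinking step, assuming an undirected graph stored as a directed edge list (the function and variable names are illustrative, not the authors' code):

```python
def unlink_to_unlearn(edge_index, forget_edges):
    """Return a new edge list with the forget edges removed.

    Both directions of each undirected forget edge are dropped;
    the model weights are left entirely untouched.
    """
    forget = set()
    for u, v in forget_edges:
        forget.add((u, v))
        forget.add((v, u))  # drop the reverse direction as well
    return [(u, v) for (u, v) in edge_index if (u, v) not in forget]

# Toy graph: 0-1, 1-2, 2-3, stored as directed pairs.
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
remaining = unlink_to_unlearn(edges, [(1, 2)])
# remaining -> [(0, 1), (1, 0), (2, 3), (3, 2)]
```

Because the parameters are never touched, the cost depends only on the number of forget edges, consistent with the paper's claim of constant computational demands.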
Related papers
- GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks [69.97213941893351]
The emergence of Graph Neural Networks (GNNs) in graph data analysis has raised critical concerns about data misuse during model training.
Existing methodologies address either data misuse detection or mitigation, and are primarily designed for local GNN models.
This paper introduces a pioneering approach called GraphGuard, to tackle these challenges.
arXiv Detail & Related papers (2023-12-13T02:59:37Z) - What Can We Learn from Unlearnable Datasets? [107.12337511216228]
Unlearnable datasets have the potential to protect data privacy by preventing deep neural networks from generalizing.
It is widely believed that neural networks trained on unlearnable datasets only learn shortcuts, simpler rules that are not useful for generalization.
In contrast, we find that networks actually can learn useful features that can be reweighed for high test performance, suggesting that image protection is not assured.
arXiv Detail & Related papers (2023-05-30T17:41:35Z) - Sequential Graph Neural Networks for Source Code Vulnerability Identification [5.582101184758527]
We present a properly curated C/C++ source code vulnerability dataset to aid in developing models.
We also propose a learning framework based on graph neural networks, denoted SEquential Graph Neural Network (SEGNN), for learning a large number of code semantic representations.
Our evaluations on two datasets and four baseline methods in a graph classification setting demonstrate state-of-the-art results.
arXiv Detail & Related papers (2023-05-23T17:25:51Z) - Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection [19.57394670843742]
We study the unlearning problem in linear-GNNs, and then introduce its extension to non-linear structures.
Given a set of nodes to unlearn, we propose PROJECTOR that unlearns by projecting the weight parameters of the pre-trained model onto a subspace that is irrelevant to features of the nodes to be forgotten.
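The projection idea behind PROJECTOR can be illustrated for a purely linear model: remove from the weights any component lying in the span of the forgotten nodes' features, so those features can no longer influence predictions. This is a simplified sketch of that idea; the names and the exact projection below are assumptions, not the paper's implementation:

```python
import numpy as np

def project_out(W, X_forget):
    """Remove from W any component lying in the span of the forget features."""
    Q, _ = np.linalg.qr(X_forget.T)  # columns of Q span the forget features
    P = Q @ Q.T                      # orthogonal projector onto that span
    return W - W @ P                 # keep only the orthogonal component

W = np.array([[1.0, 2.0, 3.0]])              # toy linear-model weights
X_forget = np.array([[1.0, 0.0, 0.0]])       # feature of the node to forget
W_new = project_out(W, X_forget)
# W_new -> [[0., 2., 3.]]: no component remains along the forgotten feature
```

For any input proportional to the forgotten feature, the projected weights now produce a zero contribution, which is the intuition behind unlearning by projection.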
arXiv Detail & Related papers (2023-02-17T16:49:10Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z) - GRNN: Generative Regression Neural Network -- A Data Leakage Attack for Federated Learning [3.050919759387984]
We show that image-based private data can be fully recovered from the shared gradient alone via our proposed Generative Regression Neural Network (GRNN).
We evaluate our method on several image classification tasks. The results show that GRNN outperforms state-of-the-art methods with better stability and higher accuracy.
arXiv Detail & Related papers (2021-05-02T18:39:37Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z) - Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z) - Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers or the generated summaries, and is not responsible for any consequences of their use.