IDEA: A Flexible Framework of Certified Unlearning for Graph Neural Networks
- URL: http://arxiv.org/abs/2407.19398v1
- Date: Sun, 28 Jul 2024 04:59:59 GMT
- Title: IDEA: A Flexible Framework of Certified Unlearning for Graph Neural Networks
- Authors: Yushun Dong, Binchi Zhang, Zhenyu Lei, Na Zou, Jundong Li
- Abstract summary: Graph Neural Networks (GNNs) have been increasingly deployed in a plethora of applications.
Privacy leakage may happen when the trained GNNs are deployed and exposed to potential attackers.
We propose a principled framework named IDEA to achieve flexible and certified unlearning for GNNs.
- Score: 68.6374698896505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have been increasingly deployed in a plethora of applications. However, the graph data used for training may contain sensitive personal information about the individuals involved. Once trained, GNNs typically encode such information in their learnable parameters, so privacy leakage may occur when a trained GNN is deployed and exposed to potential attackers. Facing such a threat, machine unlearning for GNNs has emerged as a technique that aims to remove certain personal information from a trained GNN. Among these techniques, certified unlearning stands out, as it provides a solid theoretical guarantee of how effectively the information is removed. Nevertheless, most existing certified unlearning methods for GNNs are designed to handle only node and edge unlearning requests, and they are usually tailored to either a specific GNN design or a specially designed training objective, which significantly limits their flexibility. In this paper, we propose a principled framework named IDEA to achieve flexible and certified unlearning for GNNs. Specifically, we first instantiate four types of unlearning requests on graphs, and then propose an approximation approach that flexibly handles these requests over diverse GNNs. We further provide a theoretical guarantee of the effectiveness of the proposed approach as a certification. Unlike existing alternatives, IDEA is not tied to any specific GNN architecture or optimization objective, and thus generalizes easily. Extensive experiments on real-world datasets demonstrate the superiority of IDEA from multiple key perspectives.
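Certified unlearning of this kind is typically built on an influence-function-style approximation: take one Newton step away from the deleted data's contribution, then add noise calibrated to the approximation error. Below is a minimal, self-contained sketch of that general recipe on a convex stand-in model, not IDEA's actual algorithm; the logistic-regression setup, the noise scale sigma, and the synthetic data are assumptions for illustration.
```python
# Sketch of the generic certified-unlearning recipe: (1) train a convex model,
# (2) approximate retraining-without-the-deleted-points with a one-step Newton
# (influence-function) update, (3) add Gaussian noise calibrated to the
# residual approximation error. Illustrative only; IDEA's approximation and
# certificate differ in the details.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 0.1                       # samples, features, L2 strength

X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(theta, X, y):
    """Gradient of the L2-regularized logistic loss (summed over samples)."""
    return X.T @ (sigmoid(X @ theta) - y) + lam * theta

def hessian(theta, X):
    """Hessian of the L2-regularized logistic loss."""
    p = sigmoid(X @ theta)
    return (X.T * (p * (1 - p))) @ X + lam * np.eye(X.shape[1])

# (1) Train to near-optimality on the full data with plain gradient descent.
theta = np.zeros(d)
for _ in range(2000):
    theta -= 0.01 * grad(theta, X, y) / n

# (2) Unlearning request: delete the first 10 training samples. Because the
# full-data gradient at theta is ~0, one Newton step on the retained
# objective approximates the retrained optimum.
keep = np.arange(10, n)
Xk, yk = X[keep], y[keep]
H = hessian(theta, Xk)
theta_unlearned = theta - np.linalg.solve(H, grad(theta, Xk, yk))

# (3) Certification-style noise: in a real certificate, sigma is derived from
# (epsilon, delta) and a bound on the Newton-step residual; 0.01 is a
# hypothetical placeholder.
sigma = 0.01
theta_certified = theta_unlearned + rng.normal(scale=sigma, size=d)

# Compare against exact retraining on the retained data.
theta_retrained = np.zeros(d)
for _ in range(2000):
    theta_retrained -= 0.01 * grad(theta_retrained, Xk, yk) / len(keep)
print("||unlearned - retrained|| =", np.linalg.norm(theta_unlearned - theta_retrained))
```
The printed gap between the unlearned and retrained parameters is exactly what the injected noise must dominate for an (epsilon, delta)-style certificate to hold.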
Related papers
- Stealing Training Graphs from Graph Neural Networks [54.52392250297907]
Graph Neural Networks (GNNs) have shown promising results in modeling graphs in various tasks.
As neural networks can memorize the training samples, the model parameters of GNNs have a high risk of leaking private training data.
We investigate the novel problem of stealing training graphs from trained GNNs.
arXiv Detail & Related papers (2024-11-17T23:15:36Z)
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of GNN predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation [40.21760151203987]
We present the first comprehensive study of graph reconstruction attacks, which aim to reconstruct the adjacency of nodes.
We show that a range of factors in GNNs can lead to the surprising leakage of private links.
We propose two information-theory-guided mechanisms: (1) a chain-based attack method with adaptive designs for extracting more private information, and (2) a chain-based defense method that sharply reduces the attack fidelity with moderate accuracy loss. (A simplified link-stealing sketch follows below.)
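For intuition, here is a hedged sketch of a link-stealing attack. It is not the paper's chain-based mechanism, but a much simpler posterior-similarity baseline that only illustrates the leakage channel the chain-based attack exploits more aggressively; the synthetic graph, the one-layer "victim" GNN, and all sizes are assumptions.
```python
# Posterior-similarity link inference: connected nodes aggregate each other's
# features, so their published class posteriors tend to be similar.
import numpy as np

rng = np.random.default_rng(1)
n, d, c = 60, 8, 4                            # nodes, feature dim, classes

A = (rng.random((n, n)) < 0.08).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric adjacency, no self-loops

X = rng.normal(size=(n, d))
W = rng.normal(size=(d, c))

# One message-passing layer plus softmax: a stand-in for the victim GNN's
# published class posteriors.
A_hat = A + np.eye(n)
A_hat = A_hat / A_hat.sum(1, keepdims=True)
logits = A_hat @ X @ W
P = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)

# Attack: rank node pairs by cosine similarity of their posteriors.
Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
sim = Pn @ Pn.T
iu = np.triu_indices(n, 1)
order = np.argsort(-sim[iu])                  # most similar pairs first

k = int(A[iu].sum())                          # guess as many edges as exist
pred = np.zeros_like(A[iu])
pred[order[:k]] = 1
print(f"precision@|E|: {(pred * A[iu]).sum() / k:.2f} (random baseline ~0.08)")
```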
arXiv Detail & Related papers (2023-06-15T13:00:56Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have developed rapidly in recent years.
However, GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
- An Adaptive Graph Pre-training Framework for Localized Collaborative Filtering [79.17319280791237]
We propose an adaptive graph pre-training framework for localized collaborative filtering (ADAPT).
ADAPT does not require transferring user/item embeddings, and captures both the common knowledge across different graphs and the uniqueness of each graph.
arXiv Detail & Related papers (2021-12-14T06:53:13Z)
- IGNNITION: Bridging the Gap Between Graph Neural Networks and Networking Systems [4.1591055164123665]
We present IGNNITION, a novel open-source framework that enables fast prototyping of Graph Neural Networks (GNNs) for networking systems.
IGNNITION is based on an intuitive high-level abstraction that hides the complexity behind GNNs.
Our results show that the GNN models produced by IGNNITION are equivalent in terms of accuracy and performance to their native implementations.
arXiv Detail & Related papers (2021-09-14T14:28:21Z)
- EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks [29.974829042502375]
We develop a framework named EDITS to mitigate the bias in attributed networks.
EDITS works in a model-agnostic manner, which means that it is independent of the specific GNNs applied for downstream tasks.
arXiv Detail & Related papers (2021-08-11T14:07:01Z)
- Graph-Free Knowledge Distillation for Graph Neural Networks [30.38128029453977]
We propose the first dedicated approach to distilling knowledge from a graph neural network without graph data.
The proposed graph-free KD (GFKD) learns graph topology structures for knowledge transfer by modeling them with a multinomial distribution.
We provide strategies for handling different types of prior knowledge in the graph data or the GNNs. (A simplified sketch of the idea follows below.)
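As a rough illustration, the sketch below learns a fake-graph distribution whose samples make a (frozen) teacher confident, and distills the teacher into a student on those samples. The Gumbel-sigmoid relaxation used here replaces GFKD's multinomial-distribution gradient estimator, and the tiny GNN, sizes, and losses are assumptions, not the paper's implementation.
```python
# Graph-free distillation sketch: optimize fake graph structure/features so a
# frozen teacher is confident, while a student matches the teacher's outputs.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d, c = 12, 6, 3                                   # nodes, feature dim, classes

class TinyGNN(torch.nn.Module):
    """One propagation step followed by a linear graph-level readout."""
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(d, c)

    def forward(self, A, X):
        A_hat = A + torch.eye(n)                     # add self-loops
        A_hat = A_hat / A_hat.sum(1, keepdim=True)   # row-normalize
        return self.lin(A_hat @ X).mean(0)           # graph-level logits

teacher = TinyGNN()                                  # stands in for a trained GNN
for p in teacher.parameters():
    p.requires_grad_(False)
student = TinyGNN()

# Learnable fake-graph generator: per-edge logits and node features.
edge_logits = torch.zeros(n, n, requires_grad=True)
X_fake = torch.randn(n, d, requires_grad=True)
opt = torch.optim.Adam([edge_logits, X_fake] + list(student.parameters()), lr=0.05)

for step in range(200):
    # Differentiable edge sampling via a Gumbel-sigmoid relaxation.
    u = torch.rand(n, n).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log1p(-u)           # logistic noise
    A_soft = torch.sigmoid((edge_logits + noise) / 0.5)
    A_soft = torch.triu(A_soft, 1)
    A_soft = A_soft + A_soft.T                       # symmetric, no self-loops

    t_logits = teacher(A_soft, X_fake)
    s_logits = student(A_soft.detach(), X_fake.detach())

    # Generator: prefer graphs the frozen teacher classifies confidently.
    conf_loss = -F.log_softmax(t_logits, 0).max()
    # Student: match the teacher's soft predictions (KL distillation).
    kd_loss = F.kl_div(F.log_softmax(s_logits, 0),
                       F.softmax(t_logits, 0).detach(), reduction="sum")
    (conf_loss + kd_loss).backward()
    opt.step()
    opt.zero_grad()

print("final distillation loss:", kd_loss.item())
```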
arXiv Detail & Related papers (2021-05-16T21:38:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.