OpenGU: A Comprehensive Benchmark for Graph Unlearning
- URL: http://arxiv.org/abs/2501.02728v1
- Date: Mon, 06 Jan 2025 02:57:32 GMT
- Title: OpenGU: A Comprehensive Benchmark for Graph Unlearning
- Authors: Bowen Fan, Yuming Ai, Xunkai Li, Zhilin Guo, Rong-Hua Li, Guoren Wang
- Abstract summary: Graph Unlearning (GU) has emerged as a critical solution for privacy-sensitive applications.
We present OpenGU, the first GU benchmark, integrating 16 state-of-the-art (SOTA) GU algorithms and 37 multi-domain datasets.
We draw 8 crucial conclusions about existing GU methods and gain valuable insights into their limitations.
- Abstract: Graph Machine Learning is essential for understanding and analyzing relational data. However, privacy-sensitive applications demand the ability to efficiently remove sensitive information from trained graph neural networks (GNNs), avoiding the unnecessary time and space overhead caused by retraining models from scratch. To address this issue, Graph Unlearning (GU) has emerged as a critical solution, with the potential to support dynamic graph updates in data management systems and enable scalable unlearning in distributed data systems while ensuring privacy compliance. Unlike machine unlearning in computer vision or other fields, GU faces unique difficulties due to the non-Euclidean nature of graph data and the recursive message-passing mechanism of GNNs. Additionally, the diversity of downstream tasks and the complexity of unlearning requests further amplify these challenges. Despite the proliferation of diverse GU strategies, the absence of a benchmark providing fair comparisons for GU, and the limited flexibility in combining downstream tasks and unlearning requests, have yielded inconsistencies in evaluations, hindering the development of this domain. To fill this gap, we present OpenGU, the first GU benchmark, where 16 SOTA GU algorithms and 37 multi-domain datasets are integrated, enabling various downstream tasks with 13 GNN backbones when responding to flexible unlearning requests. Based on this unified benchmark framework, we are able to provide a comprehensive and fair evaluation for GU. Through extensive experimentation, we have drawn 8 crucial conclusions about existing GU methods, while also gaining valuable insights into their limitations, shedding light on potential avenues for future research.
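To make the setting concrete, the sketch below shows the exact-unlearning baseline that approximate GU methods aim to match: when a node-level forget request arrives, drop those nodes and their incident edges and retrain the GNN from scratch on what remains. This is a minimal illustration, not OpenGU's actual API; the model, function names, and hyperparameters are all assumptions.

```python
# Minimal sketch of exact node unlearning by retraining (not OpenGU's API).
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    """Two-layer GCN over a dense, symmetrically normalized adjacency."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.w1(x))  # first round of message passing
        return a_hat @ self.w2(h)       # second round

def normalize_adj(adj):
    """D^{-1/2} (A + I) D^{-1/2} with self-loops added."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).rsqrt()
    return d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]

def train(model, x, a_hat, y, mask, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x, a_hat)[mask], y[mask]).backward()
        opt.step()
    return model

def unlearn_by_retraining(x, adj, y, train_mask, forget_ids):
    """Exact unlearning: delete the requested nodes and retrain from scratch.
    Its cost is exactly the overhead that approximate GU methods avoid."""
    keep = torch.ones(adj.size(0), dtype=torch.bool)
    keep[forget_ids] = False
    x2, y2, mask2 = x[keep], y[keep], train_mask[keep]
    adj2 = adj[keep][:, keep]  # dropping a node removes its incident edges
    model = SimpleGCN(x.size(1), 16, int(y.max()) + 1)
    return train(model, x2, normalize_adj(adj2), y2, mask2)
```

An approximate GU method would instead adjust the already-trained model directly (for example by partition-level retraining or influence-style parameter corrections), aiming to match this baseline's behavior at a fraction of the retraining cost.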
Related papers
- Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive Experiments, Analysis, and Improvements [54.006506479865344]
We propose a unified evaluation framework for graph-level Graph Neural Networks (GNNs).
This framework provides a standardized setting to evaluate GNNs across diverse datasets.
We also propose a novel GNN model with enhanced expressivity and generalization capabilities.
arXiv Detail & Related papers (2025-01-01T08:48:53Z)
- Federated Continual Graph Learning [7.464095716250756]
We present a pioneering study on Federated Continual Graph Learning (FCGL).
FCGL adapts to multiple evolving graphs within decentralized settings while adhering to storage and privacy constraints.
Our work begins with a comprehensive empirical analysis of FCGL, assessing its data characteristics, feasibility, and effectiveness.
arXiv Detail & Related papers (2024-11-28T05:15:47Z)
- Towards Graph Prompt Learning: A Survey and Beyond [38.55555996765227]
Large-scale "pre-train and prompt learning" paradigms have demonstrated remarkable adaptability.
This survey categorizes over 100 relevant works in this field, summarizing general design principles and the latest applications.
arXiv Detail & Related papers (2024-08-26T06:36:42Z)
- xAI-Drop: Don't Use What You Cannot Explain [23.33477769275026]
Graph Neural Networks (GNNs) have emerged as the predominant paradigm for learning from graph-structured data.
GNNs face challenges such as a lack of generalization and poor interpretability.
We introduce xAI-Drop, a novel topological-level dropping regularizer.
arXiv Detail & Related papers (2024-07-29T14:53:45Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
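As background for the entry above, here is a toy sketch of direct feedback alignment (DFA) on a plain two-layer network, the mechanism DFA-GNN adapts to GNNs: the output error reaches the hidden layer through a fixed random matrix rather than through the transpose of the forward weights. This is a generic illustration, not the paper's algorithm; all dimensions and rates are assumptions.

```python
# Toy direct feedback alignment (DFA) update for a 2-layer network; the
# fixed random matrix B replaces backprop's w2.T. Illustrative only.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d_in, d_hid, d_out = 32, 8, 16, 4
x = torch.randn(n, d_in)
y = torch.randint(0, d_out, (n,))

w1 = torch.randn(d_in, d_hid) * 0.1
w2 = torch.randn(d_hid, d_out) * 0.1
B = torch.randn(d_out, d_hid) * 0.1  # fixed random feedback, never trained

lr = 0.05
for _ in range(200):
    h = torch.relu(x @ w1)                    # forward pass
    e = torch.softmax(h @ w2, dim=1) - F.one_hot(y, d_out).float()
    dh = (e @ B) * (h > 0).float()            # DFA: error via B, not w2.T
    w2 -= lr * (h.T @ e) / n                  # output layer: same as backprop
    w1 -= lr * (x.T @ dh) / n                 # hidden layer: feedback-aligned
```

Because no error signal travels backward through the forward weights, each layer's update can be computed as soon as the output error is known, which is what makes forward-learning schemes attractive for parallelism.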
- HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous Graph Neural Networks [22.775933880072294]
HetGPT is a post-training prompting framework for heterogeneous graph neural networks (HGNNs).
It improves the performance of state-of-the-art HGNNs on semi-supervised node classification.
arXiv Detail & Related papers (2023-10-23T19:35:57Z)
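The general post-training prompting idea behind the HetGPT entry can be sketched as follows: freeze the pre-trained model and optimize only a small additive prompt on the node features. This is a generic feature-prompt sketch, not HetGPT's heterogeneous mechanism; `frozen_gnn`, the prompt shape, and the hyperparameters are assumptions.

```python
# Generic graph prompt-tuning sketch: adapt a frozen, pre-trained GNN by
# learning a single additive feature prompt. Not HetGPT's actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F

def prompt_tune(frozen_gnn, x, a_hat, y, mask, epochs=100):
    for p in frozen_gnn.parameters():
        p.requires_grad_(False)                       # GNN stays fixed
    prompt = nn.Parameter(torch.zeros(1, x.size(1)))  # shared prompt vector
    opt = torch.optim.Adam([prompt], lr=0.01)         # only the prompt trains
    for _ in range(epochs):
        opt.zero_grad()
        out = frozen_gnn(x + prompt, a_hat)           # prompted node features
        F.cross_entropy(out[mask], y[mask]).backward()
        opt.step()
    return prompt
```

Here `frozen_gnn` is assumed to take `(features, normalized_adjacency)` like the GCN sketch earlier on this page; only the prompt's handful of parameters are updated, which is what makes prompting cheap compared with full fine-tuning.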
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D2PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
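The dual-channel idea in the D2PT entry above can be sketched as follows: propagate features both over the given, possibly incomplete adjacency and over a global kNN graph built from feature similarity. This is an approximation in the paper's spirit, not its implementation; the kNN construction, hop count, and teleport rate are assumptions.

```python
# Hedged sketch of dual-channel propagation in the spirit of D2PT: one
# channel uses the observed adjacency, the other a global semantic kNN
# graph; both use APPNP-style personalized-PageRank propagation.
import torch
import torch.nn.functional as F

def normalize_adj(adj):
    a = adj + torch.eye(adj.size(0))
    d = a.sum(dim=1).rsqrt()
    return d[:, None] * a * d[None, :]

def knn_graph(x, k=5):
    """Connect each node to its k most feature-similar nodes (symmetrized)."""
    z = F.normalize(x, dim=1)
    idx = (z @ z.T).topk(k + 1, dim=1).indices[:, 1:]  # drop the self-match
    a = torch.zeros(x.size(0), x.size(0))
    a.scatter_(1, idx, 1.0)
    return ((a + a.T) > 0).float()

def dual_channel_propagate(x, adj, hops=10, alpha=0.1):
    a1, a2 = normalize_adj(adj), normalize_adj(knn_graph(x))
    h1 = h2 = x
    for _ in range(hops):  # long-range, decoupled propagation
        h1 = (1 - alpha) * (a1 @ h1) + alpha * x
        h2 = (1 - alpha) * (a2 @ h2) + alpha * x
    return (h1 + h2) / 2   # combine structural and semantic channels
```

The semantic channel gives nodes with missing or sparse edges an alternative path to receive information, which is the point of the dual-channel design when structure is weak.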
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Graph Neural Networks: Methods, Applications, and Opportunities [1.2183405753834562]
This article provides a comprehensive survey of graph neural networks (GNNs) in each learning setting.
The approaches for each learning task are analyzed from both theoretical as well as empirical standpoints.
Various applications and benchmark datasets are also provided, along with open challenges still plaguing the general applicability of GNNs.
arXiv Detail & Related papers (2021-08-24T13:46:19Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
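Behind the GMI entry above is the InfoMax principle. The sketch below shows a generic JSD-style lower-bound objective that scores true (feature, embedding) pairs against corrupted ones; it is a common estimator in this family, not the paper's exact GMI decomposition, and all names are illustrative. The embeddings `h` are assumed to come from some graph encoder.

```python
# Generic InfoMax-style loss: a bilinear discriminator separates each node's
# (features, embedding) pair from a corrupted pair with shuffled features.
# Minimizing this JSD-style loss maximizes a mutual-information lower bound.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InfoMaxLoss(nn.Module):
    def __init__(self, feat_dim, emb_dim):
        super().__init__()
        self.disc = nn.Bilinear(feat_dim, emb_dim, 1)  # pair scorer

    def forward(self, x, h):
        pos = self.disc(x, h)                          # true pairs
        x_shuf = x[torch.randperm(x.size(0))]          # corrupted features
        neg = self.disc(x_shuf, h)                     # mismatched pairs
        # -log sigmoid(pos) - log(1 - sigmoid(neg)), written stably:
        return (F.softplus(-pos) + F.softplus(neg)).mean()
```

Training a graph encoder to minimize this loss drives its hidden representations to retain information about the inputs, which is the unsupervised signal the entry describes.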
This list is automatically generated from the titles and abstracts of the papers on this site.