Inductive Graph Unlearning
- URL: http://arxiv.org/abs/2304.03093v2
- Date: Fri, 7 Apr 2023 05:59:48 GMT
- Title: Inductive Graph Unlearning
- Authors: Cheng-Long Wang, Mengdi Huai, Di Wang
- Abstract summary: GraphEraser is designed for the transductive graph setting, where the graph is static and the attributes and edges of test nodes are visible during training.
It is unsuitable for the inductive setting, where the graph could be dynamic and the test graph information is invisible in advance.
We propose GUIDE, which consists of three components: guided graph partitioning with fairness and balance, efficient subgraph repair, and similarity-based aggregation.
- Score: 23.051237635521108
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As a way to implement the "right to be forgotten" in machine learning,
machine unlearning aims to completely remove the contributions and information of
the samples to be deleted from a trained model without affecting the contributions
of other samples. Recently, many frameworks for machine unlearning have been
proposed, most of them focused on image and text data. To extend machine
unlearning to graph data, GraphEraser has been proposed. A critical issue,
however, is that GraphEraser is specifically designed for the transductive graph
setting, where the graph is static and the attributes and edges of test nodes are
visible during training. It is unsuitable for the inductive setting, where the
graph can be dynamic and the test graph information is not visible in advance.
Such inductive capability is essential for production machine learning systems
with evolving graphs, such as social media and transaction networks. To fill this
gap, we propose the GUided InDuctivE Graph Unlearning framework (GUIDE). GUIDE
consists of three components: guided graph partitioning with fairness and
balance, efficient subgraph repair, and similarity-based aggregation. Empirically,
we evaluate our method on several inductive benchmarks and evolving transaction
graphs. GUIDE can be implemented efficiently for inductive graph learning tasks
thanks to its low graph-partitioning cost, both in computation and in structural
information. The code will be available at:
https://github.com/Happy2Git/GUIDE.
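The abstract names GUIDE's three components but gives no implementation details. As a rough illustration of the shard-based unlearning paradigm that GraphEraser introduced and GUIDE follows (partition the training data, train one model per shard, aggregate shard predictions at inference, and honor a deletion by retraining only the affected shard), here is a minimal, self-contained Python sketch. Everything in it is a hypothetical stand-in rather than GUIDE's actual code: the random balanced partition stands in for guided partitioning with fairness and balance, the nearest-centroid per-shard model for a GNN trained on a repaired subgraph, and the distance-based weights for similarity-based aggregation.

```python
# Minimal sketch of shard-based graph unlearning. All names are
# hypothetical stand-ins, not GUIDE's actual implementation.
import numpy as np

rng = np.random.default_rng(0)

def balanced_partition(node_ids, num_shards):
    """Random equal-sized shards: a stand-in for GUIDE's guided
    partitioning with fairness and balance."""
    return np.array_split(rng.permutation(node_ids), num_shards)

class ShardModel:
    """Toy per-shard model: a nearest-centroid classifier over node
    features, standing in for a GNN trained on a repaired subgraph."""
    def __init__(self, num_classes):
        self.num_classes = num_classes

    def fit(self, X, y):
        self.centroids_ = np.zeros((self.num_classes, X.shape[1]))
        self.seen_ = np.zeros(self.num_classes, dtype=bool)
        for c in range(self.num_classes):
            if np.any(y == c):
                self.centroids_[c] = X[y == c].mean(axis=0)
                self.seen_[c] = True
        return self

    def predict_scores(self, x):
        scores = -np.linalg.norm(self.centroids_ - x, axis=1)
        scores[~self.seen_] = -1e9  # no support for classes absent from the shard
        return scores

def train_shards(X, y, shards, num_classes):
    return [ShardModel(num_classes).fit(X[s], y[s]) for s in shards]

def aggregate_predict(x, X, shards, models):
    """Similarity-based aggregation: shards whose nodes lie closer to the
    query node get a larger vote (a crude stand-in for GUIDE's scheme)."""
    sims = np.array([np.exp(-np.linalg.norm(X[s] - x, axis=1).mean())
                     for s in shards])
    weights = sims / sims.sum()
    combined = sum(w * m.predict_scores(x) for w, m in zip(weights, models))
    return int(combined.argmax())

def unlearn(node_id, X, y, shards, models):
    """Honor a deletion request by retraining only the shard containing
    the deleted node; all other shard models are untouched."""
    for i, s in enumerate(shards):
        if node_id in s:
            shards[i] = s[s != node_id]
            models[i] = ShardModel(models[i].num_classes).fit(
                X[shards[i]], y[shards[i]])
            break
    return shards, models

# Toy usage on random data: forget node 5, then query it as a test node.
X = rng.normal(size=(200, 8))
y = rng.integers(0, 3, size=200)
shards = balanced_partition(np.arange(200), num_shards=4)
models = train_shards(X, y, shards, num_classes=3)
print("before deletion:", aggregate_predict(X[5], X, shards, models))
shards, models = unlearn(5, X, y, shards, models)
print("after deletion:", aggregate_predict(X[5], X, shards, models))
```

The point of this structure is its cost profile: a deletion retrains one shard model rather than the full model, which is what makes unlearning tractable on large, evolving graphs. GUIDE's contribution is performing the partitioning and aggregation well in the inductive setting, where test nodes and edges are unseen at training time.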
Related papers
- Revisiting the Necessity of Graph Learning and Common Graph Benchmarks [2.1125997983972207]
Graph machine learning has enjoyed a meteoric rise in popularity since the introduction of deep learning in graph contexts.
The driving belief is that node features are insufficient for these tasks, so benchmark performance accurately reflects improvements in graph learning.
We show that, surprisingly, node features are often more than sufficient for these tasks.
arXiv Detail & Related papers (2024-12-09T03:09:04Z)
- Inductive Graph Alignment Prompt: Bridging the Gap between Graph Pre-training and Inductive Fine-tuning From Spectral Perspective [13.277779426525056]
The "graph pre-training and fine-tuning" paradigm has significantly improved Graph Neural Networks (GNNs).
However, due to the immense gap in data and tasks between the pre-training and fine-tuning stages, model performance is still limited.
We propose a novel graph-prompt-based method called Inductive Graph Alignment Prompt (IGAP).
arXiv Detail & Related papers (2024-02-21T06:25:54Z)
- Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning [84.35102534158621]
We study pre-trained language models that generate explanation graphs in an end-to-end manner.
We propose simple yet effective ways of graph perturbations via node and edge edit operations.
Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs.
arXiv Detail & Related papers (2022-04-11T00:58:27Z)
- Synthetic Graph Generation to Benchmark Graph Learning [7.914804101579097]
Graph learning algorithms have attained state-of-the-art performance on many graph analysis tasks.
One reason is the very small number of datasets used in practice to benchmark the performance of graph learning algorithms.
We propose to generate synthetic graphs and study the behaviour of graph learning algorithms in a controlled scenario.
arXiv Detail & Related papers (2022-04-04T10:48:32Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations [94.41860307845812]
Self-supervision has recently been surging at its new frontier, graph learning.
GraphCL uses a prefabricated prior reflected by the ad-hoc manual selection of graph data augmentations.
We have extended the prefabricated discrete prior in the augmentation set, to a learnable continuous prior in the parameter space of graph generators.
We have leveraged both principles of information minimization (InfoMin) and information bottleneck (InfoBN) to regularize the learned priors.
arXiv Detail & Related papers (2022-01-04T15:49:18Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Inference Attacks Against Graph Neural Networks [33.19531086886817]
Graph embedding is a powerful tool for solving graph analytics problems.
While sharing graph embeddings is intriguing, the associated privacy risks are unexplored.
We systematically investigate the information leakage of graph embeddings by mounting three inference attacks.
arXiv Detail & Related papers (2021-10-06T10:08:11Z)
- Structural Information Preserving for Graph-to-Text Generation [59.00642847499138]
The task of graph-to-text generation aims at producing sentences that preserve the meaning of input graphs.
We propose to tackle this problem by leveraging richer training signals that can guide our model for preserving input information.
Experiments on two benchmarks for graph-to-text generation show the effectiveness of our approach over a state-of-the-art baseline.
arXiv Detail & Related papers (2021-02-12T20:09:01Z)
- Graph topology inference benchmarks for machine learning [16.857405938139525]
We introduce several benchmarks specifically designed to reveal the relative merits and limitations of graph inference methods.
We also contrast some of the most prominent techniques in the literature.
arXiv Detail & Related papers (2020-07-16T09:40:32Z)
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods, unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE), are proposed.
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.