Unlearning Graph Classifiers with Limited Data Resources
- URL: http://arxiv.org/abs/2211.03216v2
- Date: Sat, 1 Jul 2023 04:05:49 GMT
- Title: Unlearning Graph Classifiers with Limited Data Resources
- Authors: Chao Pan, Eli Chien, Olgica Milenkovic
- Abstract summary: Controlled data removal is becoming an important feature of machine learning models for data-sensitive Web applications.
It is still largely unknown how to perform efficient machine unlearning of graph neural networks (GNNs).
Our main contribution is the first known nonlinear approximate graph unlearning method based on GSTs.
Our second contribution is a theoretical analysis of the computational complexity of the proposed unlearning mechanism.
Our third contribution comprises extensive simulation results which show that, compared to complete retraining of GNNs after each removal request, the new GST-based approach offers, on average, a 10.38x speed-up.
- Score: 39.29148804411811
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: As the demand for user privacy grows, controlled data removal (machine
unlearning) is becoming an important feature of machine learning models for
data-sensitive Web applications such as social networks and recommender
systems. Nevertheless, at this point it is still largely unknown how to perform
efficient machine unlearning of graph neural networks (GNNs); this is
especially the case when the number of training samples is small, in which case
unlearning can seriously compromise the performance of the model. To address
this issue, we initiate the study of unlearning the Graph Scattering Transform
(GST), a mathematical framework that is efficient, provably stable under
feature or graph topology perturbations, and offers graph classification
performance comparable to that of GNNs. Our main contribution is the first
known nonlinear approximate graph unlearning method based on GSTs. Our second
contribution is a theoretical analysis of the computational complexity of the
proposed unlearning mechanism, which is hard to replicate for deep neural
networks. Our third contribution comprises extensive simulation results which show
that, compared to complete retraining of GNNs after each removal request, the
new GST-based approach offers, on average, a 10.38x speed-up and leads to a
2.6% increase in test accuracy during unlearning of 90 out of 100 training
graphs from the IMDB dataset (10% training ratio). Our implementation is
available online at https://doi.org/10.5281/zenodo.7613150.
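For intuition about the building block being unlearned, the following is a minimal sketch, under illustrative assumptions, of how graph scattering features can be computed for a single graph: lazy-diffusion wavelets are cascaded with modulus nonlinearities and averaged into a fixed-length graph representation. The wavelet construction, function names, and readout are assumptions for exposition, not the authors' released implementation (which is at the Zenodo link above).

```python
# Hedged sketch of Graph Scattering Transform (GST) features for one graph.
# The wavelet construction and readout are illustrative, not the paper's exact recipe.
import numpy as np

def diffusion_wavelets(adj, num_scales=3):
    """Wavelets Psi_j = T^(2^j) - T^(2^(j+1)) built from the lazy random walk T."""
    deg = np.maximum(adj.sum(axis=1), 1.0)
    T = 0.5 * (np.eye(len(adj)) + adj / deg[:, None])
    powers = [np.linalg.matrix_power(T, 2 ** j) for j in range(num_scales + 1)]
    return [powers[j] - powers[j + 1] for j in range(num_scales)]

def gst_features(adj, x, num_scales=3, depth=2):
    """Cascade wavelet filtering with |.| nonlinearities; average each signal."""
    wavelets = diffusion_wavelets(adj, num_scales)
    layer, feats = [x], [x.mean(axis=0)]           # zeroth-order coefficient
    for _ in range(depth):
        nxt = []
        for signal in layer:
            for psi in wavelets:
                u = np.abs(psi @ signal)           # scattering propagator
                nxt.append(u)
                feats.append(u.mean(axis=0))       # low-pass readout per path
        layer = nxt
    return np.concatenate(feats)                   # fixed-length graph embedding

# Toy usage: a 4-node path graph with 2-dimensional node features.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 2))
print(gst_features(A, X).shape)                    # (26,) = (1 + 3 + 9) paths x 2 dims
```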
Related papers
- Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks [66.70786325911124]
Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data.
With the increasing prevalence of dynamic GNNs (DGNNs), it becomes imperative to investigate the implementation of dynamic graph unlearning.
We propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning.
arXiv Detail & Related papers (2024-05-23T10:26:18Z)
- Loss-aware Curriculum Learning for Heterogeneous Graph Neural Networks [30.333265803394998]
This paper investigates the application of curriculum learning techniques to improve the performance of Heterogeneous Graph Neural Networks (HGNNs).
To better assess the quality of the data, we design a loss-aware training schedule, named LTS, that measures the quality of every node in the data.
Our findings demonstrate the efficacy of curriculum learning in enhancing HGNNs' capabilities for analyzing complex graph-structured data.
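As a rough illustration of a loss-aware training schedule in the spirit of LTS, the sketch below trains on the lowest-loss (easiest) nodes first and linearly grows the included fraction; the pacing function, starting fraction, and names are assumptions, not the paper's exact design.

```python
# Hedged sketch of a loss-aware curriculum schedule; the pacing function and
# easy-to-hard ordering are illustrative assumptions.
import numpy as np

def loss_aware_schedule(per_node_loss, epoch, total_epochs, start_frac=0.3):
    """Indices of nodes to train on this epoch: the lowest-loss (easiest)
    fraction, grown linearly until every node is included."""
    frac = min(1.0, start_frac + (1.0 - start_frac) * epoch / max(total_epochs - 1, 1))
    k = max(1, int(frac * len(per_node_loss)))
    return np.argsort(per_node_loss)[:k]

# Toy usage: 10 nodes whose current per-node losses were measured by the model.
losses = np.random.default_rng(1).exponential(size=10)
for epoch in range(3):
    idx = loss_aware_schedule(losses, epoch, total_epochs=3)
    print(f"epoch {epoch}: train on nodes {sorted(idx.tolist())}")
```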
arXiv Detail & Related papers (2024-02-29T05:44:41Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
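The general recipe of pre-computation with random projection can be sketched as follows: per-relation neighbor aggregation is performed once, then compressed to a fixed width with a Gaussian random projection so that downstream training sees regular-shaped tensors. The relation names and the mean-aggregation rule below are illustrative assumptions, not RpHGNN's exact pipeline.

```python
# Hedged sketch of pre-computation with a Gaussian random projection for a
# heterogeneous graph; relation names and the aggregation rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def precompute(features_by_type, edges_by_relation, proj_dim=8):
    """edges_by_relation maps (src_type, dst_type) -> (src_idx, dst_idx) arrays."""
    out = {}
    for (src_t, dst_t), (src, dst) in edges_by_relation.items():
        x_src = features_by_type[src_t]
        n_dst = features_by_type[dst_t].shape[0]
        agg = np.zeros((n_dst, x_src.shape[1]))
        np.add.at(agg, dst, x_src[src])                   # sum neighbor features per target
        counts = np.bincount(dst, minlength=n_dst)[:, None]
        agg = agg / np.maximum(counts, 1)                 # mean aggregation
        proj = rng.normal(size=(x_src.shape[1], proj_dim)) / np.sqrt(proj_dim)
        out[(src_t, dst_t)] = agg @ proj                  # compress to a fixed width
    return out

# Toy usage: a single "author writes paper" relation.
feats = {"author": rng.normal(size=(5, 16)), "paper": rng.normal(size=(4, 16))}
edges = {("author", "paper"): (np.array([0, 1, 2, 2]), np.array([0, 0, 1, 3]))}
print({rel: tensor.shape for rel, tensor in precompute(feats, edges).items()})
```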
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike Lottery Ticket Hypothesis (LTH)-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
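To convey the general idea of pruning during training without a separate re-training pass, here is a hedged, generic sketch of gradual magnitude pruning: the target sparsity follows a cubic ramp and the smallest-magnitude weights are masked at each step. This is an illustration of the broad technique, not CGP's graph-specific pruning criteria.

```python
# Hedged sketch of gradual magnitude pruning inside a training loop (no separate
# re-training pass); generic illustration, not CGP's graph-aware rules.
import numpy as np

def sparsity_at(step, total_steps, final_sparsity=0.9):
    """Cubic sparsity schedule ramping from 0 to final_sparsity."""
    t = min(step / max(total_steps - 1, 1), 1.0)
    return final_sparsity * (1.0 - (1.0 - t) ** 3)

def prune_mask(weights, sparsity):
    """Boolean mask keeping the largest-magnitude entries."""
    k = int(sparsity * weights.size)
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.abs(weights) > threshold

# Toy usage: one GNN layer's weights pruned progressively across 100 steps.
W = np.random.default_rng(2).normal(size=(16, 16))
for step in (0, 50, 99):
    mask = prune_mask(W, sparsity_at(step, total_steps=100))
    print(f"step {step}: kept {mask.mean():.2f} of the weights")
```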
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Certified Graph Unlearning [39.29148804411811]
Graph-structured data is ubiquitous in practice and often processed using graph neural networks (GNNs).
We introduce the first known framework for certified graph unlearning of GNNs.
Three different types of unlearning requests need to be considered, including node feature, edge and node unlearning.
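Certified-removal-style unlearning is typically realized with a one-shot Newton update on a strongly convex, regularized linear model; the sketch below shows that update for ridge regression. The calibrated noise injection and gradient-norm bookkeeping that yield the formal certificate, as well as the graph propagation step, are omitted, so this is an assumption-laden illustration rather than the paper's method.

```python
# Hedged sketch of a one-shot Newton unlearning update for an L2-regularized
# linear (ridge) model, in the spirit of certified-removal methods; the noise
# injection and norm bounds that make it "certified" are omitted.
import numpy as np

def unlearn_ridge(w, X, y, remove_idx, lam=1.0):
    """Remove one training point by taking a single Newton step on the
    retained objective; for a quadratic loss this equals exact retraining."""
    keep = np.delete(np.arange(len(X)), remove_idx)
    X_keep, y_keep = X[keep], y[keep]
    grad = X_keep.T @ (X_keep @ w - y_keep) + lam * w      # gradient at current w
    hess = X_keep.T @ X_keep + lam * np.eye(X.shape[1])    # Hessian of retained loss
    return w - np.linalg.solve(hess, grad)

# Toy usage: fit on all 50 points, then unlearn point 7.
rng = np.random.default_rng(3)
X, y = rng.normal(size=(50, 4)), rng.normal(size=50)
w_full = np.linalg.solve(X.T @ X + np.eye(4), X.T @ y)     # ridge fit, lam = 1
print(np.round(unlearn_ridge(w_full, X, y, remove_idx=7), 3))
```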
arXiv Detail & Related papers (2022-06-18T07:41:10Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Graph Neural Network Training with Data Tiering [16.02267628659034]
Graph Neural Networks (GNNs) have shown success in learning from graph-structured data, with applications to fraud detection, recommendation, and knowledge graph reasoning.
However, training GNNs efficiently is challenging because 1) GPU memory capacity is limited and can be insufficient for large datasets, and 2) the graph-based data structure causes irregular data access patterns.
In this work, we provide a method to statistically analyze and identify the more frequently accessed data ahead of GNN training.
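One way to picture the idea is to estimate, before training, how often each node's features will be touched by neighborhood sampling and then pin the hottest nodes in GPU memory. The scoring rule below (simulated sampling counts) and the 50% tier split are illustrative assumptions, not the paper's exact analysis.

```python
# Hedged sketch of data tiering for GNN training: simulate neighborhood sampling
# before training, count feature accesses per node, and pin the hottest nodes in
# fast (GPU) memory; the scoring rule and the tier split are illustrative.
import numpy as np

def access_frequency(neighbors, train_nodes, num_trials=100, fanout=2, rng=None):
    """Count how often each node's features are touched by 1-hop sampling."""
    rng = rng or np.random.default_rng(0)
    counts = np.zeros(len(neighbors), dtype=np.int64)
    for _ in range(num_trials):
        for v in train_nodes:
            counts[v] += 1
            nbrs = neighbors[v]
            if nbrs:
                picked = rng.choice(nbrs, size=min(fanout, len(nbrs)), replace=False)
                counts[picked] += 1
    return counts

# Toy usage: keep the top half of most-frequently-accessed nodes GPU-resident.
neighbors = [[1, 2], [0, 3], [0], [1, 4], [3]]
counts = access_frequency(neighbors, train_nodes=[0, 3])
gpu_resident = np.argsort(counts)[::-1][: len(neighbors) // 2]
print("nodes pinned in GPU memory:", sorted(gpu_resident.tolist()))
```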
arXiv Detail & Related papers (2021-11-10T19:35:10Z)
- Training Graph Neural Networks by Graphon Estimation [2.5997274006052544]
We propose to train a graph neural network via resampling from a graphon estimate obtained from the underlying network data.
We show that our approach is competitive with, and in many cases outperforms, other GNN training methods that reduce over-smoothing.
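A minimal sketch of graphon-based resampling, assuming a simple histogram (block-constant) estimator: sort nodes by degree, average edge densities over blocks, and draw new training graphs from the estimate. The estimator, block count, and sampler below are assumptions, not necessarily the paper's choices.

```python
# Hedged sketch of graphon-based resampling with a block-constant (histogram)
# estimator; estimator and sampler are illustrative assumptions.
import numpy as np

def estimate_graphon(adj, num_blocks=4):
    """Sort nodes by degree, then average edge densities over node blocks."""
    order = np.argsort(-adj.sum(axis=1))
    A = adj[np.ix_(order, order)]
    blocks = np.array_split(np.arange(len(adj)), num_blocks)
    W = np.zeros((num_blocks, num_blocks))
    for i, bi in enumerate(blocks):
        for j, bj in enumerate(blocks):
            W[i, j] = A[np.ix_(bi, bj)].mean()
    return W

def resample_graph(W, n, rng=None):
    """Draw an n-node graph: one latent block per node, edges ~ Bernoulli(W)."""
    rng = rng or np.random.default_rng(0)
    z = rng.integers(0, W.shape[0], size=n)
    upper = np.triu((rng.random((n, n)) < W[np.ix_(z, z)]).astype(float), 1)
    return upper + upper.T                                  # symmetric, no self-loops

# Toy usage: resample a same-size graph from an observed 8-node graph.
rng = np.random.default_rng(4)
upper = np.triu((rng.random((8, 8)) < 0.3).astype(float), 1)
A_obs = upper + upper.T
print(int(resample_graph(estimate_graphon(A_obs), n=8).sum() / 2), "edges resampled")
```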
arXiv Detail & Related papers (2021-09-04T19:21:48Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
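The growing-graph idea can be sketched as follows: at each stage a larger graph is Bernoulli-sampled from a fixed graphon W(u, v) and training continues on it with the same weights. The graphon, growth schedule, and placeholder training step below are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch of training on successively larger graphs sampled from a graphon;
# the graphon, schedule, and placeholder training step are assumptions.
import numpy as np

def sample_from_graphon(W, n, rng):
    """n latent positions u_i ~ Uniform(0, 1); edge ij ~ Bernoulli(W(u_i, u_j))."""
    u = rng.random(n)
    P = W(u[:, None], u[None, :])
    upper = np.triu((rng.random((n, n)) < P).astype(float), 1)
    return upper + upper.T

def train_on_growing_graphs(W, sizes=(16, 32, 64), rng=None):
    rng = rng or np.random.default_rng(0)
    for n in sizes:                                # successively larger graphs
        A = sample_from_graphon(W, n, rng)
        # ... a few GNN training epochs would run on A here, reusing the weights ...
        print(f"n={n}: sampled graph with {int(A.sum() / 2)} edges")

# Toy usage with a smooth graphon W(u, v) = exp(-|u - v|).
train_on_growing_graphs(lambda u, v: np.exp(-np.abs(u - v)))
```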
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- Scalable Graph Neural Network Training: The Case for Sampling [4.9201378771958675]
Graph Neural Networks (GNNs) are a new and increasingly popular family of deep neural network architectures to perform learning on graphs.
Training them efficiently is challenging due to the irregular nature of graph data.
Two different approaches have emerged in the literature: whole-graph and sample-based training.
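To make the contrast concrete, the sketch below builds the per-hop node frontiers used in sample-based training: a fixed fan-out of neighbors is drawn around each seed node, instead of touching the whole graph at every step. Fan-out values and batch construction are illustrative assumptions.

```python
# Hedged sketch of the sample-based regime: build per-hop node frontiers by
# drawing a fixed fan-out of neighbors around each seed node.
import numpy as np

def sample_minibatch(neighbors, seed_nodes, fanouts=(3, 2), rng=None):
    """Return the node frontier reached at each hop for one mini-batch."""
    rng = rng or np.random.default_rng(0)
    frontiers = [np.array(seed_nodes)]
    for fanout in fanouts:
        nxt = []
        for v in frontiers[-1]:
            nbrs = neighbors[v]
            if nbrs:
                nxt.extend(rng.choice(nbrs, size=min(fanout, len(nbrs)), replace=False))
        frontiers.append(np.unique(nxt))
    return frontiers

# Toy usage: a 2-hop sampled computation graph around two seed nodes.
neighbors = [[1, 2, 3], [0, 2], [0, 1, 4], [0], [2]]
for hop, frontier in enumerate(sample_minibatch(neighbors, seed_nodes=[0, 4])):
    print(f"hop {hop}: nodes {frontier.tolist()}")
```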
arXiv Detail & Related papers (2021-05-05T20:44:10Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
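As a rough illustration of one binarization strategy, the forward pass below quantizes both activations and weights to {-1, +1} with a per-column scale (XNOR-Net style) inside a single mean-aggregation layer. It is a generic sketch, not the specific designs evaluated in the paper, and the straight-through gradient estimator used for training is not shown.

```python
# Hedged sketch of a single binarized message-passing layer (forward pass only):
# activations and weights are quantized to {-1, +1} with a per-column scale.
import numpy as np

def binarize(x):
    """Sign quantization with a mean-absolute-value scale per column."""
    scale = np.abs(x).mean(axis=0, keepdims=True)
    return np.where(x >= 0, 1.0, -1.0) * scale

def binary_gcn_layer(adj, x, w):
    """Mean-aggregate binarized neighbor features, then a binarized linear map."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    h = (adj @ binarize(x)) / deg                           # neighborhood aggregation
    return np.maximum(h @ binarize(w), 0.0)                 # ReLU nonlinearity

# Toy usage: 4-node graph, 8-dimensional features, 4-dimensional output.
rng = np.random.default_rng(5)
A = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], dtype=float)
print(binary_gcn_layer(A, rng.normal(size=(4, 8)), rng.normal(size=(8, 4))).shape)
```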
arXiv Detail & Related papers (2020-12-31T18:48:58Z)