Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection
- URL: http://arxiv.org/abs/2302.08990v1
- Date: Fri, 17 Feb 2023 16:49:10 GMT
- Title: Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection
- Authors: Weilin Cong, Mehrdad Mahdavi
- Abstract summary: We study the unlearning problem in linear-GNNs, and then introduce its extension to non-linear structures.
Given a set of nodes to unlearn, we propose PROJECTOR that unlearns by projecting the weight parameters of the pre-trained model onto a subspace that is irrelevant to features of the nodes to be forgotten.
- Score: 19.57394670843742
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As privacy protection receives much attention, unlearning the effect of a
specific node from a pre-trained graph learning model has become equally
important. However, due to the node dependency in the graph-structured data,
representation unlearning in Graph Neural Networks (GNNs) is challenging and
less well explored. In this paper, we fill in this gap by first studying the
unlearning problem in linear-GNNs, and then introducing its extension to
non-linear structures. Given a set of nodes to unlearn, we propose PROJECTOR
that unlearns by projecting the weight parameters of the pre-trained model onto
a subspace that is irrelevant to features of the nodes to be forgotten.
PROJECTOR overcomes the challenges caused by node dependency and enjoys
perfect data removal, i.e., the unlearned model parameters contain no
information about the unlearned node features, which is guaranteed by
algorithmic construction. Empirical results on real-world datasets illustrate
the effectiveness and efficiency of PROJECTOR.
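The projection at the heart of PROJECTOR is plain linear algebra, so the removal step is easy to sketch. Below is a minimal illustration, not the authors' released code: the model is linear, the shapes and names are our assumptions, and the forgotten features are assumed to have full row rank.

```python
import numpy as np

def project_out(W, X_forget):
    """Project weights W (d x c) onto the subspace orthogonal to the span
    of the forgotten node features X_forget (k x d). Sketch only: assumes
    a linear model and a full-row-rank X_forget."""
    Q, _ = np.linalg.qr(X_forget.T)   # orthonormal basis of the forget span
    return W - Q @ (Q.T @ W)          # strip every component in that span

# Toy usage: 16-dim features, 3 classes, forget 2 nodes.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 3))
X_forget = rng.normal(size=(2, 16))
W_new = project_out(W, X_forget)
# "Perfect removal" in miniature: the unlearned weights are exactly
# orthogonal to each forgotten feature vector.
assert np.allclose(X_forget @ W_new, 0.0)
```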
Related papers
- Erase then Rectify: A Training-Free Parameter Editing Approach for Cost-Effective Graph Unlearning [17.85404473268992]
Graph unlearning aims to eliminate the influence of nodes, edges, or attributes from a trained Graph Neural Network (GNN).
Existing graph unlearning techniques often necessitate additional training on the remaining data, leading to significant computational costs.
We propose a two-stage training-free approach, Erase then Rectify (ETR), designed for efficient and scalable graph unlearning.
arXiv Detail & Related papers (2024-09-25T07:20:59Z)
- Community-Centric Graph Unlearning [10.906555492206959]
We propose a novel Graph Structure Mapping Unlearning paradigm (GSMU) and, based on it, a method named Community-centric Graph Eraser (CGE).
CGE maps community subgraphs to nodes, thereby enabling the reconstruction of a node-level unlearning operation within a reduced mapped graph.
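As a hedged sketch of that mapping step (the community assignment below is hand-supplied; CGE's actual detection and reconstruction procedure is more involved), collapsing each community subgraph into a single mapped node yields the reduced graph on which node-level unlearning then operates:

```python
# Minimal sketch: collapse community subgraphs into single mapped nodes.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]
community_of = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}

# An edge between mapped nodes exists whenever any original edge
# crosses the two communities.
mapped_edges = {
    tuple(sorted((community_of[u], community_of[v])))
    for u, v in edges
    if community_of[u] != community_of[v]
}
print(mapped_edges)  # {('A', 'B')}: the 6-node graph shrank to 2 nodes
```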
arXiv Detail & Related papers (2024-08-19T05:37:35Z)
- Loss-aware Curriculum Learning for Heterogeneous Graph Neural Networks [30.333265803394998]
This paper investigates the application of curriculum learning techniques to improve the performance of Heterogeneous Graph Neural Networks (HGNNs).
To better assess the quality of the data, we design a loss-aware training schedule, named LTS, that measures the quality of every node in the data.
Our findings demonstrate the efficacy of curriculum learning in enhancing HGNNs' capabilities for analyzing complex graph-structured data.
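A minimal sketch of a loss-aware schedule in this spirit (the easy-to-hard pacing and all names here are our assumptions, not the paper's exact rule): rank nodes by their current loss and train on a growing easy-to-hard prefix each epoch.

```python
import numpy as np

def loss_aware_schedule(per_node_loss, epoch, num_epochs, start_frac=0.3):
    """Indices of nodes to train on this epoch: low-loss ("easy") nodes
    first, with the training set growing linearly to the full data."""
    order = np.argsort(per_node_loss)
    frac = start_frac + (1.0 - start_frac) * epoch / max(1, num_epochs - 1)
    return order[: max(1, int(frac * len(order)))]

# Toy usage: 10 nodes; epoch 0 uses the 3 easiest, the last epoch uses all.
losses = np.random.default_rng(1).uniform(size=10)
print(loss_aware_schedule(losses, epoch=0, num_epochs=5).size)  # 3
print(loss_aware_schedule(losses, epoch=4, num_epochs=5).size)  # 10
```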
arXiv Detail & Related papers (2024-02-29T05:44:41Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
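The Gumbel-Softmax step is easy to sketch in its dense form. The version below costs O(N^2) for clarity; the paper's kernelized operator is precisely what removes that quadratic cost, and is omitted here.

```python
import numpy as np

def gumbel_softmax_messages(H, tau=0.5, rng=None):
    """All-pair message passing with Gumbel-Softmax edge weights.
    Dense O(N^2) sketch; NodeFormer's kernelization avoids ever
    materializing the N x N matrix."""
    rng = rng or np.random.default_rng()
    logits = H @ H.T                                   # pairwise similarities
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + gumbel) / tau                        # perturbed, tempered
    z = np.exp(z - z.max(axis=1, keepdims=True))
    A_soft = z / z.sum(axis=1, keepdims=True)          # row-stochastic edges
    return A_soft @ H                                  # aggregate messages

H = np.random.default_rng(2).normal(size=(5, 8))
print(gumbel_softmax_messages(H).shape)  # (5, 8)
```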
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$^2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure but also on a global graph that encodes global semantic similarities.
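A rough sketch of the dual-channel idea under simplifying assumptions (linear propagation, and a cosine-kNN graph standing in for the paper's global semantic graph):

```python
import numpy as np

def dual_channel_propagate(X, A, k=2, steps=3, alpha=0.5):
    """Propagate features over the input graph (channel 1) and over a
    global kNN similarity graph (channel 2), then mix the channels."""
    A_hat = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-8)
    Xn = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-8)
    sim = Xn @ Xn.T
    np.fill_diagonal(sim, -np.inf)                    # exclude self-loops
    G = np.zeros_like(sim)
    rows = np.arange(len(X))[:, None]
    G[rows, np.argsort(-sim, axis=1)[:, :k]] = 1.0    # top-k neighbors
    G_hat = G / np.maximum(G.sum(axis=1, keepdims=True), 1e-8)
    H1, H2 = X.copy(), X.copy()
    for _ in range(steps):
        H1, H2 = A_hat @ H1, G_hat @ H2
    return alpha * H1 + (1 - alpha) * H2

# With a nearly edgeless input graph, the global channel supplies signal.
X = np.random.default_rng(3).normal(size=(6, 4))
print(dual_channel_propagate(X, np.eye(6)).shape)  # (6, 4)
```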
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
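The influence-function update itself is compact once a Hessian is in hand. Below is a minimal sketch on a ridge-regression loss, where the Hessian is exact; modeling the structural spillover of a graph deletion, which is GIF's actual contribution, is out of scope here.

```python
import numpy as np

def unlearn_by_influence(X_keep, X_del, y_del, theta, lam=1e-2):
    """One-shot estimate of retraining without (X_del, y_del):
    theta += H^{-1} g_del, with H the Hessian of the retained loss and
    g_del the deleted points' gradient at the current parameters."""
    n = len(X_keep) + len(X_del)
    H = X_keep.T @ X_keep / n + lam * np.eye(X_keep.shape[1])
    g_del = X_del.T @ (X_del @ theta - y_del) / n
    return theta + np.linalg.solve(H, g_del)

# Toy usage: fit on the full data, then delete the first 10 points.
rng = np.random.default_rng(4)
X, y, lam = rng.normal(size=(100, 5)), rng.normal(size=100), 1e-2
theta = np.linalg.solve(X.T @ X / 100 + lam * np.eye(5), X.T @ y / 100)
theta_hat = unlearn_by_influence(X[10:], X[:10], y[:10], theta, lam)
```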
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
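The summary stays at the framework level; as a generic illustration of distributionally robust training (not the paper's specific estimator), the inner loop below ascends on a feature perturbation inside an L2 ball before the outer model update would take its step.

```python
import numpy as np

def worst_case_loss(theta, X, y, eps=0.1, steps=5, lr=0.05):
    """Inner maximization of a DRO-style objective for squared loss:
    gradient-ascend on a perturbation delta of the node features,
    projecting back into an L2 ball of radius eps per node."""
    delta = np.zeros_like(X)
    for _ in range(steps):
        residual = (X + delta) @ theta - y
        delta += lr * residual[:, None] * theta[None, :]   # ascent step
        norms = np.maximum(np.linalg.norm(delta, axis=1, keepdims=True), 1e-12)
        delta *= np.minimum(1.0, eps / norms)              # project to ball
    return 0.5 * np.mean(((X + delta) @ theta - y) ** 2)

rng = np.random.default_rng(5)
X, y, theta = rng.normal(size=(20, 4)), rng.normal(size=20), rng.normal(size=4)
print(worst_case_loss(theta, X, y), 0.5 * np.mean((X @ theta - y) ** 2))
```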
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data from the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
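A hedged sketch of that projected-gradient idea (the gradient below is a random stand-in for the gradient of the actual inversion loss): take a descent step on a continuous relaxation of the edges, then project back to a valid symmetric adjacency.

```python
import numpy as np

def projected_gradient_step(A_relaxed, grad, lr=0.1):
    """One GraphMI-style step on relaxed edges: descend on the attack
    loss, then project to a symmetric adjacency with entries in [0, 1]
    and no self-loops."""
    A = A_relaxed - lr * grad
    A = (A + A.T) / 2.0            # keep the graph undirected
    A = np.clip(A, 0.0, 1.0)       # valid edge weights
    np.fill_diagonal(A, 0.0)       # no self-loops
    return A

rng = np.random.default_rng(6)
A = rng.uniform(size=(4, 4))
print(projected_gradient_step(A, rng.normal(size=(4, 4))))
```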
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More [85.52940587312256]
We propose a model-agnostic certificate based on the randomized smoothing framework which subsumes earlier work and is tight, efficient, and sparsity-aware.
We show the effectiveness of our approach on a wide variety of models, datasets, and tasks -- specifically highlighting its use for Graph Neural Networks.
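A minimal sketch of the sparsity-aware smoothing distribution (the probabilities and the toy classifier are our choices; computing the actual certified radius is omitted): binary entries flip with different probabilities for additions and deletions, and the smoothed classifier takes a majority vote.

```python
import numpy as np

def smoothed_predict(classify, x, p_add=0.01, p_del=0.4, n=1000, rng=None):
    """Majority vote of `classify` under sparsity-aware random flips:
    1 -> 0 with prob p_del, 0 -> 1 with prob p_add. Keeping p_add small
    preserves the sparsity of inputs such as adjacency rows."""
    rng = rng or np.random.default_rng()
    votes = {}
    for _ in range(n):
        flip = np.where(x == 1,
                        rng.uniform(size=x.shape) < p_del,
                        rng.uniform(size=x.shape) < p_add)
        y = classify(np.where(flip, 1 - x, x))
        votes[y] = votes.get(y, 0) + 1
    return max(votes, key=votes.get)

# Toy classifier: fires when more than 2 bits are active.
x = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
print(smoothed_predict(lambda v: int(v.sum() > 2), x,
                       rng=np.random.default_rng(7)))
```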
arXiv Detail & Related papers (2020-08-29T10:09:02Z)