Disentangle-based Continual Graph Representation Learning
- URL: http://arxiv.org/abs/2010.02565v4
- Date: Tue, 24 Nov 2020 06:33:45 GMT
- Title: Disentangle-based Continual Graph Representation Learning
- Authors: Xiaoyu Kou, Yankai Lin, Shaobo Liu, Peng Li, Jie Zhou, Yan Zhang
- Abstract summary: Graph embedding (GE) methods embed the nodes (and/or edges) of a graph into a low-dimensional semantic space.
Existing GE models are not practical in real-world applications since they overlook the streaming nature of incoming data.
We propose a disentangle-based continual graph representation learning framework inspired by the human ability to learn procedural knowledge.
- Score: 32.081943985875554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph embedding (GE) methods embed the nodes (and/or edges) of a graph
into a low-dimensional semantic space, and have shown their effectiveness in
modeling multi-relational data. However, existing GE models are not practical
in real-world applications since they overlook the streaming nature of incoming
data. To address this issue, we study the problem of continual graph
representation learning, which aims to continually train a GE model on new data
to learn incessantly emerging multi-relational data while avoiding catastrophic
forgetting of previously learned knowledge. Moreover, we propose a
disentangle-based continual graph representation learning (DiCGRL) framework
inspired by the human ability to learn procedural knowledge. The experimental
results show that DiCGRL effectively alleviates the catastrophic forgetting
problem and outperforms state-of-the-art continual learning models.
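To make the disentangle-then-update idea concrete, below is a minimal sketch in PyTorch: each entity embedding is split into K components, a relevance score selects the components that match an incoming relation, and only those components receive gradient when new triples stream in. The TransE-style scoring, the component count, and the top-n selection are illustrative assumptions, not the exact DiCGRL design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledKGE(nn.Module):
    """Toy disentangled KG embedding: every entity/relation vector is split
    into K components; a streaming update only touches the components judged
    relevant to the incoming relation (an illustration, not DiCGRL itself)."""

    def __init__(self, n_entities, n_relations, dim=64, n_components=4, top_n=2):
        super().__init__()
        assert dim % n_components == 0
        self.K, self.top_n, self.cdim = n_components, top_n, dim // n_components
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def forward(self, h_idx, r_idx, t_idx):
        split = lambda e: e.view(-1, self.K, self.cdim)
        h, r, t = split(self.ent(h_idx)), split(self.rel(r_idx)), split(self.ent(t_idx))
        comp_score = -(h + r - t).norm(dim=-1)        # (batch, K) TransE-style plausibility
        attn = F.softmax(comp_score, dim=-1)          # relevance of each component
        # keep only the top-n relevant components in the computation graph; the
        # rest are detached, so the old knowledge stored there stays untouched
        top = attn.topk(self.top_n, dim=-1).indices
        mask = torch.zeros_like(attn).scatter_(-1, top, 1.0)
        comp_score = mask * comp_score + (1.0 - mask) * comp_score.detach()
        return (attn.detach() * comp_score).sum(-1)   # higher = more plausible triple

# one continual-learning step on a small batch of newly arrived triples
model = DisentangledKGE(n_entities=1000, n_relations=50)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
h, r, t = torch.tensor([1, 2]), torch.tensor([3, 4]), torch.tensor([5, 6])
loss = -model(h, r, t).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```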
Related papers
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
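As a rough illustration of factor-guided masking, the toy function below assigns each edge to a latent factor and hides one factor's edges as the reconstruction target; the factor assignment and the masking rule are assumptions for the sketch, not DiGGR's actual procedure.

```python
import torch

def factor_guided_edge_mask(edge_index, edge_factor, drop_factor):
    """Hide the edges belonging to one latent factor; the encoder sees the
    remaining edges and the hidden ones become the self-supervised target."""
    keep = edge_factor != drop_factor
    visible = edge_index[:, keep]      # edges the encoder may observe
    target = edge_index[:, ~keep]      # edges the model must reconstruct
    return visible, target

# 5 edges, each assigned to one of 2 latent factors (assignment is hypothetical)
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 0]])
edge_factor = torch.tensor([0, 1, 0, 1, 0])
visible, target = factor_guided_edge_mask(edge_index, edge_factor, drop_factor=1)
```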
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- Community-Centric Graph Unlearning [10.906555492206959]
We propose a novel Graph Structure Mapping Unlearning paradigm (GSMU) and, based on it, a method named Community-centric Graph Eraser (CGE).
CGE maps community subgraphs to nodes, thereby enabling the reconstruction of a node-level unlearning operation within a reduced mapped graph.
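A minimal sketch of the community-to-node mapping, assuming the community partition is already given: each community collapses to a super-node, so an unlearning request only needs to touch the community that contains the deleted node. The data layout and helper names are illustrative, not CGE's implementation.

```python
from collections import defaultdict

def coarsen_by_community(edges, community):
    """Collapse every community into a super-node and keep only the
    cross-community edges, yielding a much smaller mapped graph."""
    members = defaultdict(set)
    for node, com in community.items():
        members[com].add(node)
    coarse_edges = set()
    for u, v in edges:
        cu, cv = community[u], community[v]
        if cu != cv:
            coarse_edges.add((min(cu, cv), max(cu, cv)))
    return coarse_edges, members

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
coarse_edges, members = coarsen_by_community(edges, community)
# unlearning node 2 now only requires revisiting community "A"
affected_community = community[2]
```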
arXiv Detail & Related papers (2024-08-19T05:37:35Z)
- Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks [66.70786325911124]
Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data.
With the increasing prevalence of dynamic graph neural networks (DGNNs), it becomes imperative to investigate the implementation of dynamic graph unlearning.
We propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning.
arXiv Detail & Related papers (2024-05-23T10:26:18Z)
- Continual Learning on Graphs: Challenges, Solutions, and Opportunities [72.7886669278433]
We provide a comprehensive review of existing continual graph learning (CGL) algorithms.
We compare CGL methods with traditional continual learning techniques and analyze the applicability of those techniques to forgetting tasks.
We will maintain an up-to-date repository featuring a comprehensive list of accessible algorithms.
arXiv Detail & Related papers (2024-02-18T12:24:45Z)
- A Topology-aware Graph Coarsening Framework for Continual Graph Learning [8.136809136959302]
Continual learning on graphs tackles the problem of training a graph neural network (GNN) where graph data arrive in a streaming fashion.
Traditional continual learning strategies such as Experience Replay can be adapted to streaming graphs.
We propose TA$\mathbb{CO}$, a (t)opology-(a)ware graph (co)arsening and (co)ntinual learning framework.
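For intuition on the replay side, here is a minimal experience-replay buffer for streaming graph tasks, assuming a fixed per-task budget and reservoir sampling; TACO's actual contribution is the topology-aware coarsening of what gets stored, which this sketch does not implement.

```python
import random

class GraphReplayBuffer:
    """Keep a few representative (possibly coarsened) subgraphs per old task
    and mix them into training on the new task to limit forgetting."""

    def __init__(self, per_task_budget=10):
        self.budget = per_task_budget
        self.buffer = {}   # task_id -> list of stored subgraphs
        self.seen = {}     # task_id -> number of subgraphs observed so far

    def add(self, task_id, subgraph):
        store = self.buffer.setdefault(task_id, [])
        n = self.seen.get(task_id, 0)
        if len(store) < self.budget:
            store.append(subgraph)
        else:
            # reservoir sampling keeps a uniform sample under a fixed budget
            j = random.randint(0, n)
            if j < self.budget:
                store[j] = subgraph
        self.seen[task_id] = n + 1

    def replay(self, k=4):
        old = [g for graphs in self.buffer.values() for g in graphs]
        return random.sample(old, min(k, len(old)))
```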
arXiv Detail & Related papers (2024-01-05T22:22:13Z)
- Continual Learning on Dynamic Graphs via Parameter Isolation [40.96053483180836]
We propose Parameter Isolation GNN (PI-GNN) for continual learning on dynamic graphs.
We find parameters that correspond to unaffected patterns via optimization and freeze them to prevent them from being rewritten.
Experiments on eight real-world datasets corroborate the effectiveness of PI-GNN.
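A minimal sketch of the freezing step, assuming the masks over stable parameters have already been identified (PI-GNN finds them via optimization): gradients of frozen entries are zeroed before the optimizer step so the patterns they encode cannot be rewritten.

```python
import torch
import torch.nn as nn

def freeze_stable_parameters(model, frozen_masks):
    """Zero out gradients of weights that encode patterns unaffected by the
    new graph snapshot; mask entries: 1 = frozen, 0 = still trainable."""
    for name, p in model.named_parameters():
        if p.grad is not None and name in frozen_masks:
            p.grad.mul_(1.0 - frozen_masks[name])

# usage after loss.backward(), before optimizer.step()
model = nn.Linear(8, 4)
masks = {"weight": torch.ones(4, 8), "bias": torch.zeros(4)}  # freeze all weights
loss = model(torch.randn(2, 8)).sum()
loss.backward()
freeze_stable_parameters(model, masks)
```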
arXiv Detail & Related papers (2023-05-23T08:49:19Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation of the deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
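The core influence-function estimate can be illustrated on a linear toy problem: removing a few training points shifts the optimum by roughly the inverse Hessian times the gradient those points contributed. The ridge-regression setting below is an assumption for illustration; GIF's contribution is making such estimates work for GNNs, where deleting data also changes the graph structure.

```python
import numpy as np

def influence_unlearn(theta, X, X_del, y_del, lam=1e-2):
    """One-shot influence-function update for ridge regression: estimate the
    parameters after deleting (X_del, y_del) without retraining."""
    n = X.shape[0]
    # Hessian of the average loss 0.5*(x.theta - y)^2 + 0.5*lam*||theta||^2
    H = X.T @ X / n + lam * np.eye(X.shape[1])
    # gradient contributed by the deleted points at the current parameters
    g_del = X_del.T @ (X_del @ theta - y_del) / n
    # removing those points shifts the optimum by approximately +H^{-1} g_del
    return theta + np.linalg.solve(H, g_del)

rng = np.random.default_rng(0)
X, theta_true = rng.normal(size=(200, 5)), np.arange(5.0)
y = X @ theta_true + 0.1 * rng.normal(size=200)
H = X.T @ X / 200 + 1e-2 * np.eye(5)
theta_hat = np.linalg.solve(H, X.T @ y / 200)      # ridge fit on all data
theta_unlearned = influence_unlearn(theta_hat, X, X[:5], y[:5])
```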
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Continual Graph Learning: A Survey [4.618696834991205]
Research on continual learning (CL) mainly focuses on data represented in the Euclidean space.
Most graph learning models are tailored for static graphs.
Catastrophic forgetting also emerges in graph learning models when they are trained incrementally.
arXiv Detail & Related papers (2023-01-28T15:42:49Z)
- Data Augmentation for Deep Graph Learning: A Survey [66.04015540536027]
We first propose a taxonomy for graph data augmentation and then provide a structured review by categorizing the related work based on the augmented information modalities.
Focusing on the two challenging problems in DGL (i.e., optimal graph learning and low-resource graph learning), we also discuss and review the existing learning paradigms which are based on graph data augmentation.
arXiv Detail & Related papers (2022-02-16T18:30:33Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
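A minimal sketch of the agreement-maximization step, assuming node embeddings have already been computed under the anchor graph and under the learned graph: an NT-Xent-style contrastive loss treats the same node in the two views as a positive pair. The temperature and the exact loss form are illustrative choices, not necessarily the paper's.

```python
import torch
import torch.nn.functional as F

def anchor_contrastive_loss(z_anchor, z_learned, temperature=0.5):
    """Node-wise contrastive objective between the anchor-graph view and the
    learned-graph view: diagonal pairs are positives, all others negatives."""
    za = F.normalize(z_anchor, dim=-1)
    zl = F.normalize(z_learned, dim=-1)
    logits = za @ zl.t() / temperature          # (N, N) cosine similarities
    labels = torch.arange(za.size(0))           # positives sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# embeddings of the same 128 nodes under the two graph views
z_anchor, z_learned = torch.randn(128, 32), torch.randn(128, 32)
loss = anchor_contrastive_loss(z_anchor, z_learned)
```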
arXiv Detail & Related papers (2022-01-17T11:57:29Z)