Continual Learning on Graphs: Challenges, Solutions, and Opportunities
- URL: http://arxiv.org/abs/2402.11565v1
- Date: Sun, 18 Feb 2024 12:24:45 GMT
- Title: Continual Learning on Graphs: Challenges, Solutions, and Opportunities
- Authors: Xikun Zhang, Dongjin Song, Dacheng Tao
- Abstract summary: We provide a comprehensive review of existing continual graph learning (CGL) algorithms.
We compare these methods with traditional continual learning techniques and analyze the applicability of those techniques to CGL tasks.
We will maintain an up-to-date repository featuring a comprehensive list of CGL algorithms.
- Score: 72.7886669278433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual learning on graph data has recently attracted paramount attention
for its aim to resolve the catastrophic forgetting problem on existing tasks
while adapting the sequentially updated model to newly emerged graph tasks.
While there have been efforts to summarize progress on continual learning
research over Euclidean data, e.g., images and texts, a systematic review of
progress in continual learning on graphs, a.k.a. continual graph learning (CGL)
or lifelong graph learning, is still lacking. Graph data are far more complex
in terms of data structures and application scenarios, making CGL task
settings, model designs, and applications extremely challenging. To bridge the
gap, we provide a comprehensive review of existing continual graph learning
(CGL) algorithms by elucidating the different task settings and categorizing
the existing methods based on their characteristics. We compare the CGL methods
with traditional continual learning techniques and analyze the applicability of
the traditional continual learning techniques to CGL tasks. Additionally, we
review the benchmark works that are crucial to CGL research. Finally, we
discuss the remaining challenges and propose several future directions. We will
maintain an up-to-date GitHub repository featuring a comprehensive list of CGL
algorithms, accessible at
https://github.com/UConn-DSIS/Survey-of-Continual-Learning-on-Graphs.
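To make the catastrophic forgetting problem above concrete, here is a minimal sketch of experience replay, one common family of continual-learning strategies. All names (`ReplayBuffer`, `train_on_task`) are illustrative assumptions, and the "training step" is a stub, not any surveyed method's implementation:

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past examples, sampled while learning new tasks."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def add(self, example):
        if len(self.items) >= self.capacity:
            # Random eviction keeps the buffer a rough sample of past tasks.
            self.items.pop(random.randrange(len(self.items)))
        self.items.append(example)

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

def train_on_task(task_examples, buffer, replay_size=2):
    """Interleave new-task examples with replayed old ones to resist forgetting."""
    seen = []
    for ex in task_examples:
        batch = [ex] + buffer.sample(replay_size)
        seen.extend(batch)  # stand-in for a gradient step on the mixed batch
        buffer.add(ex)
    return seen

buffer = ReplayBuffer(capacity=4)
train_on_task(["g1", "g2", "g3"], buffer)  # first task's graphs
train_on_task(["g4", "g5"], buffer)        # second task revisits old ones
print(len(buffer.items))  # → 4
```

The same pattern carries over to CGL, where buffered items would be subgraphs or nodes with their neighborhoods rather than independent examples.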
Related papers
- E-CGL: An Efficient Continual Graph Learner [30.757231591601997]
In continual graph learning, graphs evolve based on streaming graph data.
Continual graph learning presents unique challenges that require adaptive and efficient graph learning methods.
This paper presents an Efficient Continual Graph Learner (E-CGL).
arXiv Detail & Related papers (2024-08-18T04:10:30Z)
- A Survey of Data-Efficient Graph Learning [16.053913182723143]
We introduce a novel concept of Data-Efficient Graph Learning (DEGL) as a research frontier.
We systematically review recent advances on several key aspects, including self-supervised graph learning, semi-supervised graph learning, and few-shot graph learning.
arXiv Detail & Related papers (2024-02-01T09:28:48Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
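The two SimTeG steps above can be sketched roughly as follows. The hash-based `lm_last_hidden_state` function is a self-contained stand-in for a real fine-tuned language model (an assumption made so the sketch runs without any model weights), not the authors' code:

```python
import hashlib

DIM = 8  # toy embedding width; real LM hidden states are much wider

def lm_last_hidden_state(token: str) -> list[float]:
    """Deterministic toy vector standing in for a fine-tuned LM's last hidden state."""
    digest = hashlib.sha256(token.encode()).digest()
    return [b / 255.0 for b in digest[:DIM]]

def mean_pool(vectors: list[list[float]]) -> list[float]:
    """Average token-level states into a single node embedding."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(DIM)]

# Each node in a textual graph carries raw text (here, pre-split tokens).
node_texts = {
    0: ["graph", "survey"],
    1: ["continual", "learning"],
}

# Step 2: encode each node's text and pool into per-node features.
node_embeddings = {
    nid: mean_pool([lm_last_hidden_state(tok) for tok in toks])
    for nid, toks in node_texts.items()
}
# node_embeddings now serves as the feature matrix handed to any downstream GNN.
print(len(node_embeddings[0]))  # → 8
```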
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Counterfactual Learning on Graphs: A Survey [34.47646823407408]
Graph neural networks (GNNs) have achieved great success in representation learning on graphs.
Counterfactual learning on graphs has shown promising results in alleviating GNN drawbacks such as limited fairness and explainability.
Various approaches have been proposed for counterfactual fairness, explainability, link prediction and other applications on graphs.
arXiv Detail & Related papers (2023-04-03T21:42:42Z)
- Continual Graph Learning: A Survey [4.618696834991205]
Research on continual learning (CL) mainly focuses on data represented in the Euclidean space.
Most graph learning models are tailored for static graphs.
Catastrophic forgetting also emerges in graph learning models when being trained incrementally.
arXiv Detail & Related papers (2023-01-28T15:42:49Z)
- Data Augmentation for Deep Graph Learning: A Survey [66.04015540536027]
We first propose a taxonomy for graph data augmentation and then provide a structured review by categorizing the related work based on the augmented information modalities.
Focusing on the two challenging problems in DGL (i.e., optimal graph learning and low-resource graph learning), we also discuss and review the existing learning paradigms which are based on graph data augmentation.
arXiv Detail & Related papers (2022-02-16T18:30:33Z)
- CogDL: A Comprehensive Library for Graph Deep Learning [55.694091294633054]
We present CogDL, a library for graph deep learning that allows researchers and practitioners to conduct experiments, compare methods, and build applications with ease and efficiency.
In CogDL, we propose a unified design for the training and evaluation of GNN models for various graph tasks, making it unique among existing graph learning libraries.
We develop efficient sparse operators for CogDL, making it highly competitive in efficiency among graph learning libraries.
arXiv Detail & Related papers (2021-03-01T12:35:16Z)
- Graph Self-Supervised Learning: A Survey [73.86209411547183]
Self-supervised learning (SSL) has become a promising and trending learning paradigm for graph data.
We present a timely and comprehensive review of the existing approaches which employ SSL techniques for graph data.
arXiv Detail & Related papers (2021-02-27T03:04:21Z)
- Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings [53.58077686470096]
We propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL) for jointly and iteratively learning graph structure and graph embedding.
Our experiments show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines.
arXiv Detail & Related papers (2020-06-21T19:49:15Z)
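The iterate-between-structure-and-embedding idea behind IDGL can be sketched roughly as follows. This toy cosine-similarity graph learner and averaging step are illustrative assumptions, not the authors' implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def learn_graph(embs, threshold=0.9):
    """Re-estimate structure: link nodes whose embeddings are similar enough."""
    n = len(embs)
    return {i: [j for j in range(n) if j != i and cosine(embs[i], embs[j]) > threshold]
            for i in range(n)}

def propagate(embs, graph, alpha=0.5):
    """Refine embeddings: mix each one with the mean of its current neighbors."""
    out = []
    for i, e in enumerate(embs):
        nbrs = graph[i]
        if not nbrs:
            out.append(e)
            continue
        mean = [sum(embs[j][d] for j in nbrs) / len(nbrs) for d in range(len(e))]
        out.append([(1 - alpha) * a + alpha * b for a, b in zip(e, mean)])
    return out

embs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
for _ in range(3):  # a few joint structure/embedding refinement rounds
    graph = learn_graph(embs)
    embs = propagate(embs, graph)
print(graph[0])  # nodes 0 and 1 end up linked; node 2 stays apart
```

The real framework learns the similarity metric and the GNN jointly end-to-end; this loop only shows the alternating-refinement skeleton.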
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.