Universal Graph Continual Learning
- URL: http://arxiv.org/abs/2308.13982v1
- Date: Sun, 27 Aug 2023 01:19:19 GMT
- Title: Universal Graph Continual Learning
- Authors: Thanh Duc Hoang, Do Viet Tung, Duy-Hung Nguyen, Bao-Sinh Nguyen, Huy
Hoang Nguyen, Hung Le
- Abstract summary: We focus on a universal approach wherein each data point in a task can be a node or a graph, and the task varies from node to graph classification.
We propose a novel method that enables graph neural networks to excel in this universal setting.
- Score: 22.010954622073598
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address catastrophic forgetting in graph learning as incoming data
transitions from one graph distribution to another. Whereas prior studies
primarily tackle one setting of graph continual learning such as incremental
node classification, we focus on a universal approach wherein each data point
in a task can be a node or a graph, and the task varies from node to graph
classification. We propose a novel method that enables graph neural networks to
excel in this universal setting. Our approach preserves knowledge of past
tasks through a rehearsal mechanism that maintains local and global structure
consistency across graphs. We benchmark our method against various continual
learning baselines on real-world graph datasets and achieve significant
improvements in average performance and forgetting across tasks.
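The listing carries no code, but a minimal sketch of rehearsal-based continual learning with an embedding-consistency penalty may help fix ideas. Everything below is a hypothetical stand-in, not the authors' actual mechanism: the `model` returning (embedding, logits), the buffer size, and the MSE consistency term are all assumptions.

```python
import random
import torch
import torch.nn.functional as F

class RehearsalBuffer:
    """Reservoir-sampled memory of past examples: (input, label, embedding
    at storage time). The input could be a node's subgraph or a whole graph."""
    def __init__(self, capacity=500):
        self.capacity, self.items, self.seen = capacity, [], 0

    def add(self, x, y, z_old):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append((x, y, z_old.detach()))
        else:
            j = random.randrange(self.seen)   # reservoir sampling keeps the
            if j < self.capacity:             # buffer unbiased over the stream
                self.items[j] = (x, y, z_old.detach())

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

def replay_step(model, buffer, x_new, y_new, lam=0.1):
    """One step: task loss on new data, replayed task loss on old data, and a
    consistency penalty tying current embeddings of replayed points to the
    embeddings they had when stored (a crude proxy for structure consistency)."""
    z_new, logits_new = model(x_new)          # assumed: model -> (embedding, logits)
    loss = F.cross_entropy(logits_new, y_new)
    for x, y, z_old in buffer.sample(32):
        z, logits = model(x)
        loss = loss + F.cross_entropy(logits, y) + lam * F.mse_loss(z, z_old)
    return loss
```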
Related papers
- Graph Learning under Distribution Shifts: A Comprehensive Survey on
Domain Adaptation, Out-of-distribution, and Continual Learning [53.81365215811222]
We provide a review and summary of the latest approaches, strategies, and insights that address distribution shifts within the context of graph learning.
We categorize existing graph learning methods into several essential scenarios, including graph domain adaptation learning, graph out-of-distribution learning, and graph continual learning.
We discuss the potential applications and future directions for graph learning under distribution shifts with a systematic analysis of the current state in this field.
arXiv Detail & Related papers (2024-02-26T07:52:40Z) - Towards Generalizability of Multi-Agent Reinforcement Learning in Graphs with Recurrent Message Passing [0.9353820277714449]
In decentralized approaches, agents operate within a given graph and make decisions based on partial or outdated observations.
This work focuses on generalizability and resolves the trade-off in observed neighborhood size by maintaining a continuous flow of information through the whole graph.
Our approach can be used in a decentralized manner at runtime and in combination with a reinforcement learning algorithm of choice.
arXiv Detail & Related papers (2024-02-07T16:53:09Z) - A Topology-aware Graph Coarsening Framework for Continual Graph Learning [8.136809136959302]
Continual learning on graphs tackles the problem of training a graph neural network (GNN) where graph data arrive in a streaming fashion.
Traditional continual learning strategies such as Experience Replay can be adapted to streaming graphs.
We propose TA$\mathbb{CO}$, a (t)opology-(a)ware graph (co)arsening and (co)ntinual learning framework.
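TACO's actual coarsening strategy is more elaborate, but the basic operation of collapsing a graph under a cluster assignment can be sketched as follows; the hard one-hot assignment and the toy path graph are illustrative assumptions, not the paper's method:

```python
import numpy as np

def coarsen(adj, assign):
    """Collapse a graph under a hard cluster assignment: with one-hot
    membership matrix P, the super-node adjacency is A_c = P^T A P, whose
    entries sum the edge weights between (and within) clusters."""
    n, k = len(assign), int(assign.max()) + 1
    P = np.zeros((n, k))
    P[np.arange(n), assign] = 1.0
    A_c = P.T @ adj @ P
    np.fill_diagonal(A_c, 0.0)   # drop intra-cluster mass (self-loops)
    return A_c

# Toy example: a 4-node path graph coarsened into 2 super-nodes
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(coarsen(A, np.array([0, 0, 1, 1])))   # -> [[0. 1.] [1. 0.]]
```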
arXiv Detail & Related papers (2024-01-05T22:22:13Z) - Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
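As a hedged illustration of what such an iterative algorithm can look like, below is the standard fixed-point iteration for the Bures-Wasserstein barycenter of PSD matrices (Alvarez-Esteban et al.). Representing each graph by, say, the pseudo-inverse of its Laplacian is an assumption here, not necessarily the paper's exact embedding:

```python
import numpy as np
from scipy.linalg import sqrtm

def bw_barycenter(covs, iters=100, tol=1e-8):
    """Fixed-point iteration for the Bures-Wasserstein barycenter of PSD
    matrices:  S <- S^{-1/2} (mean_i (S^{1/2} C_i S^{1/2})^{1/2})^2 S^{-1/2}."""
    S = np.mean(covs, axis=0)                       # start at the Euclidean mean
    for _ in range(iters):
        root = np.real(sqrtm(S))
        inv_root = np.linalg.inv(root)
        T = np.mean([np.real(sqrtm(root @ C @ root)) for C in covs], axis=0)
        S_new = inv_root @ T @ T @ inv_root
        if np.linalg.norm(S_new - S) < tol:
            return S_new
        S = S_new
    return S
```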
arXiv Detail & Related papers (2023-05-31T11:04:53Z) - State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z) - Graph Pooling for Graph Neural Networks: Progress, Challenges, and
Opportunities [128.55790219377315]
Graph neural networks have emerged as a leading architecture for many graph-level tasks.
Graph pooling is indispensable for obtaining a holistic representation of the whole graph.
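For context, the simplest "flat" pooling that the surveyed hierarchical methods generalize is a per-graph mean readout. A minimal sketch, with tensor shapes and names chosen for illustration:

```python
import torch

def mean_readout(node_feats, batch_index, num_graphs):
    """Average node embeddings per graph.
    node_feats:  (N, d) float tensor, embeddings of all nodes in the batch
    batch_index: (N,) long tensor, graph id of each node
    Returns a (num_graphs, d) graph-level representation."""
    out = torch.zeros(num_graphs, node_feats.size(1), dtype=node_feats.dtype)
    out.index_add_(0, batch_index, node_feats)                 # per-graph sums
    counts = torch.bincount(batch_index, minlength=num_graphs).clamp(min=1)
    return out / counts.unsqueeze(1).to(node_feats.dtype)
```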
arXiv Detail & Related papers (2022-04-15T04:02:06Z) - Self-supervised Auxiliary Learning for Graph Neural Networks via
Meta-Learning [16.847149163314462]
We propose a novel self-supervised auxiliary learning framework to effectively learn graph neural networks.
Our method learns to learn a primary task alongside various auxiliary tasks to improve generalization performance.
Our methods can be applied to any graph neural networks in a plug-in manner without manual labeling or additional data.
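A minimal sketch of the underlying joint objective, assuming the meta-learned part is reduced to learnable task weights (a deliberate simplification of the paper's meta-learning scheme):

```python
import torch

def joint_loss(primary_loss, aux_losses, log_weights):
    """Primary-task loss plus a softmax-weighted sum of self-supervised
    auxiliary losses. In the meta-learning view, `log_weights` would be
    tuned against held-out primary-task performance rather than jointly."""
    weights = torch.softmax(log_weights, dim=0)
    return primary_loss + sum(w * l for w, l in zip(weights, aux_losses))
```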
arXiv Detail & Related papers (2021-03-01T05:52:57Z) - Co-embedding of Nodes and Edges with Graph Neural Networks [13.020745622327894]
Graph embedding is a way to transform and encode graph-structured data that lives in a high-dimensional, non-Euclidean feature space.
CensNet is a general graph embedding framework, which embeds both nodes and edges to a latent feature space.
Our approach achieves or matches the state-of-the-art performance in four graph learning tasks.
arXiv Detail & Related papers (2020-10-25T22:39:31Z) - Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
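GCC frames pre-training as subgraph instance discrimination with a contrastive (InfoNCE-style) objective. A minimal sketch of such a loss follows; the encoder and the batch construction are left out and assumed:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.07):
    """InfoNCE over a batch of paired views: row i of z1 and row i of z2
    are embeddings of two augmented views of the same instance (e.g. two
    random-walk subgraphs around one ego node); all other rows are negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                     # (B, B) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)         # positives on the diagonal
```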
arXiv Detail & Related papers (2020-06-17T16:18:35Z)