Graph Continual Learning with Debiased Lossless Memory Replay
- URL: http://arxiv.org/abs/2404.10984v2
- Date: Tue, 15 Oct 2024 04:11:00 GMT
- Title: Graph Continual Learning with Debiased Lossless Memory Replay
- Authors: Chaoxi Niu, Guansong Pang, Ling Chen
- Abstract summary: Real-life graph data often expands continually, rendering the learning of graph neural networks (GNNs) on static graph data impractical.
Graph continual learning (GCL) tackles this problem by continually adapting GNNs to the expanded graph of the current task while maintaining the performance over the graph of previous tasks.
Memory replay-based methods, which aim to replay data of previous tasks when learning new tasks, have been explored as one principled approach to mitigate the forgetting of the knowledge learned from the previous tasks.
In this paper we extend this methodology with a novel framework, called Debiased Lossless Memory replay (DeLoMe).
- Score: 24.67200419367994
- Abstract: Real-life graph data often expands continually, rendering the learning of graph neural networks (GNNs) on static graph data impractical. Graph continual learning (GCL) tackles this problem by continually adapting GNNs to the expanded graph of the current task while maintaining the performance over the graph of previous tasks. Memory replay-based methods, which aim to replay data of previous tasks when learning new tasks, have been explored as one principled approach to mitigate the forgetting of the knowledge learned from the previous tasks. In this paper we extend this methodology with a novel framework, called Debiased Lossless Memory replay (DeLoMe). Unlike existing methods that sample nodes/edges of previous graphs to construct the memory, DeLoMe learns small lossless synthetic node representations as the memory. The learned memory can not only preserve the graph data privacy but also capture the holistic graph information, for which the sampling-based methods are not viable. Further, prior methods suffer from bias toward the current task due to the data imbalance between the classes in the memory data and the current data. A debiased GCL loss function is devised in DeLoMe to effectively alleviate this bias. Extensive experiments on four graph datasets show the effectiveness of DeLoMe under both class- and task-incremental learning settings.
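The abstract does not spell out the debiased loss, so below is only a minimal sketch of one standard remedy for the class imbalance it describes: logit-adjusted cross-entropy over a joint batch of current-task and replayed data. All names (`model`, `mem_x`, `class_counts`) are hypothetical, and graph structure handling is omitted for brevity; this is not DeLoMe's exact objective.

```python
import torch
import torch.nn.functional as F

def debiased_replay_loss(logits, labels, class_counts, tau=1.0):
    # Logit-adjusted cross-entropy: adding tau * log(prior) to the logits
    # during training makes frequent (current-task) classes "easier" in the
    # loss, pushing the raw logits toward balance with the small memory classes.
    # NOTE: a generic debiasing sketch, not DeLoMe's published loss.
    prior = class_counts.float() / class_counts.sum()
    return F.cross_entropy(logits + tau * torch.log(prior + 1e-12), labels)

def replay_step(model, optimizer, cur_x, cur_y, mem_x, mem_y, class_counts):
    # One replay step over a joint batch; `mem_x` stands in for DeLoMe's
    # learned lossless synthetic node representations.
    optimizer.zero_grad()
    x = torch.cat([cur_x, mem_x])
    y = torch.cat([cur_y, mem_y])
    loss = debiased_replay_loss(model(x), y, class_counts)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The adjustment only reshapes the training signal; at inference the raw logits are used as usual.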
Related papers
- Prompt-Driven Continual Graph Learning [35.58675758528851]
Continual Graph Learning (CGL) aims to accommodate new tasks over evolving graph data without forgetting prior knowledge.
This paper introduces a novel prompt-driven continual graph learning framework, which learns a separate prompt for each incoming task and maintains the underlying graph neural network model fixed.
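As a rough illustration of the idea summarized above (not the paper's actual architecture), the sketch below freezes a GNN backbone and trains only a small per-task prompt vector added to the input node features; the backbone's `(x, edge_index)` signature is an assumption.

```python
import torch
import torch.nn as nn

class PromptedGNN(nn.Module):
    """Prompt-driven continual graph learning, sketched: the backbone GNN is
    frozen and only a per-task prompt is trained for each incoming task."""
    def __init__(self, backbone, feat_dim, num_tasks):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():  # keep the GNN fixed
            p.requires_grad_(False)
        # one learnable prompt per task, broadcast over all nodes
        self.prompts = nn.ParameterList(
            [nn.Parameter(torch.zeros(feat_dim)) for _ in range(num_tasks)])

    def forward(self, x, edge_index, task_id):
        x = x + self.prompts[task_id]  # task-specific feature shift
        return self.backbone(x, edge_index)
```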
arXiv Detail & Related papers (2025-02-10T10:28:11Z)
- Graph Memory Learning: Imitating Lifelong Remembering and Forgetting of Brain Networks [31.554027786868815]
This paper introduces a new concept of graph memory learning: Brain-inspired Graph Memory Learning (BGML).
BGML incorporates a multi-granular hierarchical progressive learning mechanism rooted in feature graph grain learning to mitigate potential conflict between memorization and forgetting.
In addition, to tackle the issue of unreliable structures in newly added incremental information, the paper introduces an information self-assessment ownership mechanism.
arXiv Detail & Related papers (2024-07-27T05:50:54Z)
- A Topology-aware Graph Coarsening Framework for Continual Graph Learning [8.136809136959302]
Continual learning on graphs tackles the problem of training a graph neural network (GNN) where graph data arrive in a streaming fashion.
Traditional continual learning strategies such as Experience Replay can be adapted to streaming graphs.
We propose TA$\mathbb{CO}$, a (t)opology-(a)ware graph (co)arsening and (co)ntinual learning framework.
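For intuition, here is a minimal, generic coarsening step of the kind such frameworks build on; the topology-aware node grouping that defines TA$\mathbb{CO}$ itself is not reproduced, and `assign` is a hypothetical one-hot cluster assignment matrix.

```python
import torch

def coarsen(x, adj, assign):
    """Toy graph coarsening: given a hard assignment matrix `assign`
    (n x m, one-hot rows), merge nodes into m super-nodes with
    mean-pooled features and summed inter-cluster edge weights."""
    size = assign.sum(dim=0).clamp(min=1).unsqueeze(1)  # nodes per cluster
    x_c = (assign.t() @ x) / size                       # mean-pooled features
    adj_c = assign.t() @ adj @ assign                   # merged edge weights
    adj_c.fill_diagonal_(0)                             # drop self-loops
    return x_c, adj_c
```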
arXiv Detail & Related papers (2024-01-05T22:22:13Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
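A hedged sketch of the pre-computation idea, assuming a single homogeneous (normalized) adjacency for simplicity, whereas RpHGNN handles per-relation propagation: features are propagated for a few hops offline, and each hop's result is compressed with a fixed Gaussian random matrix so the per-node input to the downstream model stays small.

```python
import torch

def random_projection_precompute(x, adj, out_dim, hops=2, seed=0):
    """One-time message passing plus random projection (simplified)."""
    g = torch.Generator().manual_seed(seed)
    outs = []
    h = x
    for _ in range(hops):
        h = adj @ h  # one message-passing hop
        proj = torch.randn(h.size(1), out_dim, generator=g) / h.size(1) ** 0.5
        outs.append(h @ proj)  # random low-dimensional sketch of this hop
    return torch.cat(outs, dim=1)  # fixed-size tensor per node
```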
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- CaT: Balanced Continual Graph Learning with Graph Condensation [29.7368211701716]
Continual graph learning (CGL) aims to continually update a graph model as graph data is fed in a streaming manner.
Recent replay-based methods intend to solve this problem by updating the model using both the entire new-coming data and a memory bank that stores replayed graphs.
To solve these issues, a Condense and Train (CaT) framework is proposed in this paper.
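The following is a bare-bones gradient-matching step of the kind graph condensation methods use, not CaT's exact objective; `syn_x` is assumed to be a leaf tensor with `requires_grad=True` held by `syn_opt`, and structure condensation is omitted.

```python
import torch
import torch.nn.functional as F

def condense_step(model, real_x, real_y, syn_x, syn_y, syn_opt):
    """Update the synthetic memory so the model's gradient on it matches
    the gradient on the real incoming data (gradient matching, simplified)."""
    params = list(model.parameters())
    g_real = torch.autograd.grad(
        F.cross_entropy(model(real_x), real_y), params)
    g_syn = torch.autograd.grad(
        F.cross_entropy(model(syn_x), syn_y), params,
        create_graph=True)  # grads must stay differentiable w.r.t. syn_x
    loss = sum(F.mse_loss(a, b) for a, b in zip(g_syn, g_real))
    syn_opt.zero_grad()
    loss.backward()  # flows through g_syn into syn_x only
    syn_opt.step()
    return loss.item()
```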
arXiv Detail & Related papers (2023-09-18T03:28:49Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
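A minimal sketch of the localized idea, assuming node embeddings `z` and an `edge_index` of neighbor pairs: first-order neighbors serve as positives and no data augmentation is used. The O(n^2) similarity matrix here ignores Local-GCL's kernelized approximation for negative pairs.

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(z, edge_index, tau=0.5):
    """InfoNCE-style loss with first-order neighbors as positives."""
    z = F.normalize(z, dim=1)
    sim = torch.exp(z @ z.t() / tau)          # pairwise similarities
    src, dst = edge_index                     # neighbor (positive) pairs
    pos = sim[src, dst]
    denom = sim.sum(dim=1) - sim.diagonal()   # exclude self-similarity
    return -torch.log(pos / denom[src]).mean()
```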
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- DOTIN: Dropping Task-Irrelevant Nodes for GNNs [119.17997089267124]
Recent graph learning approaches have introduced the pooling strategy to reduce the size of graphs for learning.
We design a new approach called DOTIN (Dropping Task-Irrelevant Nodes) to reduce the size of graphs.
Our method speeds up GAT by about 50% on graph-level tasks including graph classification and graph edit distance.
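As a toy version of the node-dropping step (in DOTIN the relevance scores come from attention with learnable task tokens; here they are simply given):

```python
import torch

def drop_task_irrelevant(x, scores, keep_ratio=0.5):
    """Keep only the top-k nodes ranked by a task-relevance score;
    edge filtering for the induced subgraph is left out for brevity."""
    k = max(1, int(keep_ratio * x.size(0)))
    idx = scores.topk(k).indices  # indices of the most relevant nodes
    return x[idx], idx
```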
arXiv Detail & Related papers (2022-04-28T12:00:39Z)
- Scaling R-GCN Training with Graph Summarization [71.06855946732296]
Training of Relational Graph Convolutional Networks (R-GCN) does not scale well with the size of the graph.
In this work, we experiment with the use of graph summarization techniques to compress the graph.
We obtain reasonable results on the AIFB, MUTAG and AM datasets.
arXiv Detail & Related papers (2022-03-05T00:28:43Z)
- Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations [94.41860307845812]
Self-supervision has recently surged at its new frontier, graph learning.
GraphCL uses a prefabricated prior reflected by the ad-hoc manual selection of graph data augmentations.
We have extended the prefabricated discrete prior in the augmentation set, to a learnable continuous prior in the parameter space of graph generators.
We have leveraged both principles of information minimization (InfoMin) and information bottleneck (InfoBN) to regularize the learned priors.
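A hedged sketch of what a learnable continuous prior over one augmentation family can look like: per-edge keep-probabilities relaxed with Logistic (binary Concrete) noise so that sampled views remain differentiable. The InfoMin/InfoBN regularizers that keep such a generator from collapsing are omitted.

```python
import torch
import torch.nn as nn

class LearnableEdgeDrop(nn.Module):
    """Learnable edge-dropping augmentation: each edge gets a trainable
    keep-logit; views are sampled with a differentiable relaxation."""
    def __init__(self, num_edges):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_edges))  # keep-prob per edge

    def forward(self, edge_weight, temp=0.5):
        u = torch.rand_like(self.logits)
        noise = torch.log(u) - torch.log1p(-u)  # Logistic (binary Concrete) noise
        mask = torch.sigmoid((self.logits + noise) / temp)  # soft keep-mask
        return edge_weight * mask  # one sampled augmented view
```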
arXiv Detail & Related papers (2022-01-04T15:49:18Z)
- Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of the Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.