Reinforced Continual Learning for Graphs
- URL: http://arxiv.org/abs/2209.01556v1
- Date: Sun, 4 Sep 2022 07:49:59 GMT
- Title: Reinforced Continual Learning for Graphs
- Authors: Appan Rakaraddi, Siew Kei Lam, Mahardhika Pratama, Marcus De Carvalho
- Abstract summary: This paper proposes a graph continual learning strategy that combines the architecture-based and memory-based approaches.
It is numerically validated with several graph continual learning benchmark problems in both task-incremental learning and class-incremental learning settings.
- Score: 18.64268861430314
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph Neural Networks (GNNs) have become the backbone for a myriad of tasks
pertaining to graphs and similar topological data structures. While many works
have been established in domains related to node and graph
classification/regression tasks, they mostly deal with a single task. Continual
learning on graphs is largely unexplored and existing graph continual learning
approaches are limited to the task-incremental learning scenarios. This paper
proposes a graph continual learning strategy that combines the
architecture-based and memory-based approaches. The structural learning
strategy is driven by reinforcement learning, where a controller network is
trained in such a way to determine an optimal number of nodes to be
added/pruned from the base network when new tasks are observed, thus assuring
sufficient network capacities. The parameter learning strategy is underpinned
by the concept of Dark Experience replay method to cope with the catastrophic
forgetting problem. Our approach is numerically validated with several graph
continual learning benchmark problems in both task-incremental learning and
class-incremental learning settings. Compared to recently published works, our
approach demonstrates improved performance in both the settings. The
implementation code can be found at \url{https://github.com/codexhammer/gcl}.
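To make the parameter learning strategy concrete, below is a minimal PyTorch sketch of a Dark Experience Replay update, assuming a generic `model` and `optimizer`; the buffer layout and the `alpha` weight are illustrative choices, not the repository's actual code. The idea is to store past samples together with the logits the network produced for them, then regularize training on new tasks so the current logits stay close to the stored ones.

```python
import random
import torch
import torch.nn.functional as F

buffer = []          # stores (features, label, logits-at-storage-time)
BUFFER_SIZE = 500
alpha = 0.5          # weight of the logit-matching (distillation) term

def der_step(model, optimizer, x, y):
    optimizer.zero_grad()
    logits = model(x)
    loss = F.cross_entropy(logits, y)      # plain loss on the new task

    if buffer:
        bx, _, bz = random.choice(buffer)  # replay one stored batch
        # match current outputs to the logits recorded when the sample
        # was first seen ("dark knowledge"), mitigating forgetting
        loss = loss + alpha * F.mse_loss(model(bx), bz)

    loss.backward()
    optimizer.step()

    # store the current batch with detached logits for future replay
    if len(buffer) < BUFFER_SIZE:
        buffer.append((x.detach(), y.detach(), logits.detach()))
```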
Related papers
- A Topology-aware Graph Coarsening Framework for Continual Graph Learning [8.136809136959302]
Continual learning on graphs tackles the problem of training a graph neural network (GNN) where graph data arrive in a streaming fashion.
Traditional continual learning strategies such as Experience Replay can be adapted to streaming graphs.
We propose TACO, a (t)opology-(a)ware graph (co)arsening and (co)ntinual learning framework.
arXiv Detail & Related papers (2024-01-05T22:22:13Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
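A rough sketch of that two-stage pipeline, with the checkpoint name and mean pooling as illustrative assumptions (and the PEFT fine-tuning stage omitted):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Read node features off a (fine-tuned) LM's last hidden states.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
lm = AutoModel.from_pretrained("distilbert-base-uncased")

node_texts = ["paper title and abstract ...", "another node's text ..."]

with torch.no_grad():
    batch = tokenizer(node_texts, padding=True, truncation=True,
                      return_tensors="pt")
    hidden = lm(**batch).last_hidden_state               # (nodes, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float() # ignore padding
    node_embeddings = (hidden * mask).sum(1) / mask.sum(1)

# `node_embeddings` can now serve as the node feature matrix of any GNN.
```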
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- DOTIN: Dropping Task-Irrelevant Nodes for GNNs [119.17997089267124]
Recent graph learning approaches have introduced the pooling strategy to reduce the size of graphs for learning.
We design a new approach called DOTIN (Dropping Task-Irrelevant Nodes) to reduce the size of graphs.
Our method speeds up GAT by about 50% on graph-level tasks including graph classification and graph edit distance.
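The summary does not spell out how task-irrelevant nodes are identified; the following is a generic sketch of the idea, using a learned relevance score and top-k retention as stand-ins for DOTIN's actual mechanism:

```python
import torch
import torch.nn as nn

class NodeDropper(nn.Module):
    """Keep only the k highest-scoring nodes and their induced subgraph."""
    def __init__(self, dim, keep_ratio=0.5):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # learned per-node relevance score
        self.keep_ratio = keep_ratio

    def forward(self, x, adj):
        s = self.score(x).squeeze(-1)                 # (num_nodes,)
        k = max(1, int(self.keep_ratio * x.size(0)))
        keep = torch.topk(s, k).indices               # most relevant nodes
        # gate kept features by their scores so the selection is trainable
        x_kept = x[keep] * torch.sigmoid(s[keep]).unsqueeze(-1)
        return x_kept, adj[keep][:, keep]             # induced subgraph
```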
arXiv Detail & Related papers (2022-04-28T12:00:39Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
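A minimal sketch of that anchor-graph contrastive setup, assuming a toy linear encoder, a dense learned adjacency, and an InfoNCE-style loss (all illustrative choices):

```python
import torch
import torch.nn.functional as F

# Embed nodes once through a fixed "anchor" adjacency and once through a
# learned adjacency, then pull the two views of each node together.
N, d, h, tau = 16, 8, 4, 0.5
X = torch.randn(N, d)
A_anchor = (torch.rand(N, N) < 0.2).float()  # stand-in anchor graph

W = torch.randn(d, h, requires_grad=True)    # shared encoder weights
Z = torch.randn(N, h, requires_grad=True)    # parameterizes learned graph

def encode(A, X):
    return F.normalize((A + torch.eye(N)) @ X @ W, dim=1)

A_learned = torch.sigmoid(Z @ Z.T)           # dense learned topology
u, v = encode(A_anchor, X), encode(A_learned, X)

# InfoNCE: each node's two views are positives, all other nodes negatives
sim = u @ v.T / tau
loss = F.cross_entropy(sim, torch.arange(N))
loss.backward()                              # updates both W and Z
```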
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Graph Representation Learning for Multi-Task Settings: a Meta-Learning Approach [5.629161809575013]
We propose a novel training strategy for graph representation learning, based on meta-learning.
Our method avoids the difficulties arising when learning to perform multiple tasks concurrently.
We show that the embeddings produced by a model trained with our method can be used to perform multiple tasks with comparable or, surprisingly, even higher performance than both single-task and multi-task end-to-end models.
arXiv Detail & Related papers (2022-01-10T12:58:46Z)
- Temporal Graph Network Embedding with Causal Anonymous Walks Representations [54.05212871508062]
We propose a novel approach for dynamic network representation learning based on Temporal Graph Network.
We provide a benchmark pipeline for the evaluation of temporal network embeddings.
We show the applicability and superior performance of our model in the real-world downstream graph machine learning task provided by one of the top European banks.
arXiv Detail & Related papers (2021-08-19T15:39:52Z)
- Co-embedding of Nodes and Edges with Graph Neural Networks [13.020745622327894]
Graph embedding is a way to transform and encode a graph's structure in a high-dimensional, non-Euclidean feature space.
CensNet is a general graph embedding framework, which embeds both nodes and edges to a latent feature space.
Our approach achieves or matches the state-of-the-art performance in four graph learning tasks.
arXiv Detail & Related papers (2020-10-25T22:39:31Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator.
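A minimal sketch of an affine skip connection, pairing a simple mean-aggregation convolution (a stand-in for "any graph convolution operator") with a fully connected skip branch:

```python
import torch
import torch.nn as nn

class AffineSkipConv(nn.Module):
    """Graph convolution output summed with an affine transform of the input."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.conv = nn.Linear(in_dim, out_dim)   # weights of the conv branch
        self.skip = nn.Linear(in_dim, out_dim)   # the affine skip branch

    def forward(self, x, adj):
        # mean aggregation over neighbors (with self-loops) as the conv step
        a_hat = adj + torch.eye(adj.size(0))
        agg = a_hat @ x / a_hat.sum(1, keepdim=True)
        return self.conv(agg) + self.skip(x)     # conv output + affine skip
```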
arXiv Detail & Related papers (2020-04-06T13:25:46Z)
- Adversarial Attacks on Graph Neural Networks via Meta Learning [4.139895092509202]
We investigate training-time attacks on graph neural networks for node classification that perturb the discrete graph structure.
Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks.
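A toy sketch of the meta-gradient idea: relax the adjacency matrix to continuous values, differentiate the attacker's loss through one inner training step of a linear GCN, and greedily pick the edge with the largest gradient. The toy data, one-step inner loop, and greedy rule are simplifications for illustration.

```python
import torch
import torch.nn.functional as F

N, d = 8, 5
X = torch.randn(N, d)
y = torch.randint(0, 2, (N,))
A = (torch.rand(N, N) < 0.3).float()
A = ((A + A.T) > 0).float().fill_diagonal_(0).requires_grad_(True)

W = torch.zeros(d, 2, requires_grad=True)  # victim's weights

def gcn_logits(A, W):
    a_hat = A + torch.eye(N)               # self-loops
    d_inv = torch.diag(a_hat.sum(1).pow(-0.5))
    return d_inv @ a_hat @ d_inv @ X @ W   # symmetric-normalized linear GCN

# inner step: one differentiable SGD update of the victim's weights
grad_W, = torch.autograd.grad(
    F.cross_entropy(gcn_logits(A, W), y), W, create_graph=True)
W_adapted = W - 0.1 * grad_W

# meta-gradient: attacker wants high loss *after* the victim trains
attack_loss = F.cross_entropy(gcn_logits(A, W_adapted), y)
meta_grad, = torch.autograd.grad(attack_loss, A)

flip = meta_grad.abs().argmax()  # strongest single-edge candidate to flip
```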
arXiv Detail & Related papers (2019-02-22T09:20:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.