Benchmarking Sensitivity of Continual Graph Learning for Skeleton-Based
Action Recognition
- URL: http://arxiv.org/abs/2401.18054v1
- Date: Wed, 31 Jan 2024 18:20:42 GMT
- Title: Benchmarking Sensitivity of Continual Graph Learning for Skeleton-Based
Action Recognition
- Authors: Wei Wei, Tom De Schepper, Kevin Mets
- Abstract summary: Continual learning (CL) aims to build machine learning models that can accumulate knowledge continuously over different tasks without retraining from scratch.
Previous studies have shown that pre-training graph neural networks (GNNs) may lead to negative transfer after fine-tuning.
We propose the first continual graph learning benchmark for spatio-temporal graphs.
- Score: 6.14431765787048
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual learning (CL) is the research field that aims to build machine
learning models that can accumulate knowledge continuously over different tasks
without retraining from scratch. Previous studies have shown that pre-training
graph neural networks (GNN) may lead to negative transfer (Hu et al., 2020)
after fine-tuning, a setting which is closely related to CL. Thus, we focus on
studying GNN in the continual graph learning (CGL) setting. We propose the
first continual graph learning benchmark for spatio-temporal graphs and use it
to benchmark well-known CGL methods in this novel setting. The benchmark is
based on the N-UCLA and NTU-RGB+D datasets for skeleton-based action
recognition. Beyond benchmarking for standard performance metrics, we study the
class and task-order sensitivity of CGL methods, i.e., the impact of learning
order on each class/task's performance, and the architectural sensitivity of
CGL methods with backbone GNNs at various widths and depths. We reveal that
task-order robust methods can still be class-order sensitive and observe
results that contradict previous empirical observations on architectural
sensitivity in CL.
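The abstract leaves the sensitivity measurement implicit; as a rough, hedged sketch of the idea (not the paper's benchmark code), one can retrain a continual learner under several random class orders and inspect how much each class's final accuracy varies. The helpers `train_continually` and `evaluate_per_class` below are hypothetical placeholders for a CGL method and its evaluation loop.

```python
# Hedged sketch of measuring class-order sensitivity (not the paper's
# benchmark code): rerun a continual learner under several random class
# orders and report the spread of each class's final accuracy.
# `train_continually` and `evaluate_per_class` are hypothetical callables.
import random
import statistics

def class_order_sensitivity(train_continually, evaluate_per_class,
                            classes, n_orders=5, seed=0):
    """Per-class standard deviation of final accuracy across class orders."""
    rng = random.Random(seed)
    per_class_acc = {c: [] for c in classes}
    for _ in range(n_orders):
        order = list(classes)
        rng.shuffle(order)                         # a new learning order
        model = train_continually(order)           # learn classes sequentially
        acc = evaluate_per_class(model, classes)   # {class: accuracy} at the end
        for c in classes:
            per_class_acc[c].append(acc[c])
    # Small spreads for every class suggest class-order robustness; a method
    # can be task-order robust on average yet show large per-class spreads.
    return {c: statistics.pstdev(a) for c, a in per_class_acc.items()}
```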
Related papers
- Continual Learning on Graphs: Challenges, Solutions, and Opportunities [72.7886669278433]
We provide a comprehensive review of existing continual graph learning (CGL) algorithms.
We compare CGL methods with traditional continual learning techniques and analyze the applicability of these traditional techniques to CGL tasks.
We will maintain an up-to-date repository featuring a comprehensive list of accessible algorithms.
arXiv Detail & Related papers (2024-02-18T12:24:45Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings from the last hidden states of the fine-tuned LM.
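As a hedged illustration of this two-stage recipe (not the authors' released code), the sketch below LoRA-tunes a small pre-trained LM and then mean-pools its last hidden states into node features; the model name, LoRA settings, and the `node_texts` list are assumptions.

```python
# Hedged sketch of a SimTeG-style pipeline: (1) parameter-efficient
# fine-tuning of a pre-trained LM on the node texts, (2) node embeddings
# from the fine-tuned LM's last hidden states, handed to any GNN.
# `node_texts` is a hypothetical list of strings, one per graph node.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
lm = AutoModel.from_pretrained("distilbert-base-uncased")
lm = get_peft_model(lm, LoraConfig(r=8, lora_alpha=16,
                                   target_modules=["q_lin", "v_lin"]))
# ... stage 1: fine-tune `lm` (plus a small classification head) on node labels ...

@torch.no_grad()
def embed_nodes(node_texts, batch_size=32):
    """Stage 2: mean-pooled last hidden states serve as GNN node features."""
    feats = []
    for i in range(0, len(node_texts), batch_size):
        batch = tokenizer(node_texts[i:i + batch_size], padding=True,
                          truncation=True, return_tensors="pt")
        hidden = lm(**batch).last_hidden_state             # [B, T, H]
        mask = batch["attention_mask"].unsqueeze(-1)        # ignore padding tokens
        feats.append((hidden * mask).sum(1) / mask.sum(1))  # mean pooling
    return torch.cat(feats)                                 # [num_nodes, H]
```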
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Graph Contrastive Learning for Skeleton-based Action Recognition [85.86820157810213]
We propose a graph contrastive learning framework for skeleton-based action recognition.
SkeletonGCL associates graph learning across sequences by enforcing graphs to be class-discriminative.
SkeletonGCL establishes a new training paradigm, and it can be seamlessly incorporated into current graph convolutional networks.
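The summary describes class-discriminative graph contrast only at a high level; below is a minimal sketch using a generic supervised contrastive loss over flattened learned graphs (not the released SkeletonGCL code), assuming `graphs` is a [B, V, V] batch of learned adjacencies and `labels` the corresponding action classes.

```python
# Hedged sketch: learned graphs from different skeleton sequences are pulled
# together when their sequences share an action class and pushed apart
# otherwise (a generic supervised contrastive loss, not SkeletonGCL itself).
import torch
import torch.nn.functional as F

def graph_contrastive_loss(graphs, labels, temperature=0.1):
    z = F.normalize(graphs.flatten(1), dim=1)              # one vector per sequence graph
    sim = z @ z.t() / temperature                          # pairwise similarities
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    logits = sim.masked_fill(eye, -1e9)                    # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    positives = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye).float()
    denom = positives.sum(1).clamp(min=1)
    # Mean log-probability of same-class pairs, averaged over anchors.
    return -((log_prob * positives).sum(1) / denom).mean()
```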
arXiv Detail & Related papers (2023-01-26T02:09:16Z)
- Self-Supervised Graph Structure Refinement for Graph Neural Networks [31.924317784535155]
Graph structure learning (GSL) aims to learn the adjacency matrix for graph neural networks (GNNs).
Most existing GSL works apply a joint learning framework where the estimated adjacency matrix and GNN parameters are optimized for downstream tasks.
We propose a graph structure refinement (GSR) framework with a pretrain-finetune pipeline.
arXiv Detail & Related papers (2022-11-12T02:01:46Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training scheme, named EnGCN, to address these issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
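As a simplified, hedged sketch of this anchor-graph idea (assumptions: a kNN anchor graph built from raw features, a learned similarity graph, and a binary cross-entropy agreement term standing in for the paper's contrastive objective):

```python
# Hedged sketch, not the paper's code: an anchor graph from the data itself,
# a learnable soft adjacency, and a simple agreement loss between the two.
import torch
import torch.nn.functional as F

def knn_anchor_graph(features, k=10):
    """Anchor graph: keep each node's k most similar neighbours."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t()
    topk = sim.topk(k, dim=1).indices
    return torch.zeros_like(sim).scatter_(1, topk, 1.0)

class LearnedGraph(torch.nn.Module):
    """Learned graph: a soft adjacency from a trainable projection of the features."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.proj = torch.nn.Linear(dim, hidden)

    def forward(self, features):
        h = F.normalize(self.proj(features), dim=1)
        return torch.sigmoid(h @ h.t())               # entries in (0, 1)

def agreement_loss(learned_adj, anchor_adj):
    # Simplified agreement term (BCE stands in for the paper's contrastive loss).
    return F.binary_cross_entropy(learned_adj, anchor_adj)
```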
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updated on new class data, they suffer from catastrophic forgetting: the model can no longer clearly discern old class data from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
- Tackling Oversmoothing of GNNs with Contrastive Learning [35.88575306925201]
Graph neural networks (GNNs) combine the relational structure of graph data with representation learning capability.
Oversmoothing makes the final representations of nodes indiscriminative, thus deteriorating the node classification and link prediction performance.
We propose the Topology-guided Graph Contrastive Layer, named TGCL, which is the first de-oversmoothing method maintaining all three mentioned metrics.
arXiv Detail & Related papers (2021-10-26T15:56:16Z)
- Continual Learning with Gated Incremental Memories for sequential data processing [14.657656286730736]
The ability to learn in dynamic, nonstationary environments without forgetting previous knowledge, also known as Continual Learning (CL), is a key enabler for scalable and trustworthy deployments of adaptive solutions.
This work proposes a Recurrent Neural Network (RNN) model for CL that is able to deal with concept drift in input distribution without forgetting previously acquired knowledge.
arXiv Detail & Related papers (2020-04-08T16:00:20Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks with Experience Replay [16.913443823792022]
Graph Neural Networks (GNNs) have recently received significant research attention due to their superior performance on a variety of graph-related learning tasks.
In this work, we investigate the question: can GNNs be applied to continual learning tasks?
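A hedged sketch of how experience replay typically plugs into such a continual training loop follows (a generic rehearsal scheme with reservoir sampling, not the paper's exact method); `model`, `task_loaders`, and the (x, y) sample format are placeholders.

```python
# Hedged sketch of experience replay for continual training: a small buffer
# of examples from earlier tasks is replayed alongside each new task's batches.
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    def __init__(self, capacity=500, seed=0):
        self.capacity, self.data = capacity, []
        self.rng, self.seen = random.Random(seed), 0

    def add(self, sample):
        """Reservoir sampling keeps an unbiased subset of everything seen so far."""
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = sample

    def sample(self, n):
        return self.rng.sample(self.data, min(n, len(self.data)))

def train_with_replay(model, task_loaders, buffer, optimizer, replay_n=16):
    for loader in task_loaders:                  # tasks arrive sequentially
        for x, y in loader:
            loss = F.cross_entropy(model(x), y)
            replayed = buffer.sample(replay_n)   # rehearse old-task examples
            if replayed:
                xs = torch.stack([s[0] for s in replayed])
                ys = torch.stack([s[1] for s in replayed])
                loss = loss + F.cross_entropy(model(xs), ys)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            for xi, yi in zip(x, y):
                buffer.add((xi, yi))
```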
arXiv Detail & Related papers (2020-03-22T14:29:53Z)