CPT: Competence-progressive Training Strategy for Few-shot Node Classification
- URL: http://arxiv.org/abs/2402.00450v3
- Date: Fri, 18 Oct 2024 02:45:18 GMT
- Title: CPT: Competence-progressive Training Strategy for Few-shot Node Classification
- Authors: Qilong Yan, Yufeng Zhang, Jinghao Zhang, Jingpu Duan, Jian Yin
- Abstract summary: Graph Neural Networks (GNNs) have made significant advancements in node classification, but their success relies on sufficient labeled nodes per class in the training data.
Traditional episodic meta-learning approaches have shown promise in this domain, but they face an inherent limitation.
We introduce CPT, a novel two-stage curriculum learning method that aligns task difficulty with the meta-learner's progressive competence.
- Score: 11.17199104891692
- Abstract: Graph Neural Networks (GNNs) have made significant advancements in node classification, but their success relies on sufficient labeled nodes per class in the training data. Real-world graph data often exhibit a long-tail distribution with sparse labels, emphasizing the importance of GNNs' ability in few-shot node classification, which entails categorizing nodes with limited data. Traditional episodic meta-learning approaches have shown promise in this domain, but they face an inherent limitation: random and uniform task assignment ignores task difficulty, which can drive the model toward suboptimal solutions and confront the meta-learner with complex tasks too early, hindering proper learning. Ideally, the meta-learner should start with simple concepts and advance to more complex ones, much as humans learn. We therefore introduce CPT, a novel two-stage curriculum learning method that aligns task difficulty with the meta-learner's progressive competence, enhancing overall performance. Specifically, CPT's initial stage focuses on simpler tasks, building foundational skills for engaging with complex tasks later. Importantly, the second stage dynamically adjusts task difficulty based on the meta-learner's growing competence, aiming for optimal knowledge acquisition. Extensive experiments on popular node classification datasets demonstrate that our strategy significantly improves over existing methods.
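As a rough illustration of the two-stage idea, here is a minimal sketch of a competence-progressive task sampler in plain Python. The difficulty scores, the competence proxy (running accuracy), and the `meta_learner.train_on` API are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch of a competence-progressive curriculum, assuming each
# meta-task has a precomputed difficulty score in [0, 1].
import random

def sample_task(tasks, difficulties, competence):
    """Sample uniformly among tasks no harder than the current competence."""
    eligible = [t for t, d in zip(tasks, difficulties) if d <= competence]
    return random.choice(eligible) if eligible else random.choice(tasks)

def train_cpt(meta_learner, tasks, difficulties, warmup_steps, total_steps):
    competence = 0.2  # stage 1: restrict training to the easiest tasks
    for step in range(total_steps):
        task = sample_task(tasks, difficulties, competence)
        accuracy = meta_learner.train_on(task)  # hypothetical API, returns [0, 1]
        if step >= warmup_steps:
            # stage 2: raise competence (and thus the admissible task
            # difficulty) as the meta-learner's performance grows
            competence = min(1.0, max(competence, accuracy))
```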
Related papers
- Curriculum Learning for Graph Neural Networks: Which Edges Should We Learn First [13.37867275976255]
We propose a novel strategy to incorporate more edges into training according to their difficulty, from easy to hard; a sketch follows this entry.
We demonstrate the strength of our proposed method in improving the generalization ability and robustness of learned representations.
arXiv Detail & Related papers (2023-10-28T15:35:34Z)
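A minimal sketch of such an easy-to-hard edge curriculum, assuming each edge already carries a difficulty score; the scoring function and staging here are illustrative assumptions, not the paper's exact method:

```python
def edge_curriculum(edges, difficulty, num_stages):
    """Yield progressively larger edge subsets, easiest edges first.

    edges      -- list of (u, v) pairs
    difficulty -- dict mapping each edge to an assumed difficulty score
    """
    ranked = sorted(edges, key=lambda e: difficulty[e])
    for stage in range(1, num_stages + 1):
        cutoff = int(len(ranked) * stage / num_stages)
        yield ranked[:cutoff]  # train the GNN on this subset at this stage
```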
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Task-Equivariant Graph Few-shot Learning [7.78018583713337]
It is important for Graph Neural Networks (GNNs) to be able to classify nodes with a limited number of labeled nodes, known as few-shot node classification.
We propose a new approach, the Task-Equivariant Graph few-shot learning (TEG) framework.
Our TEG framework enables the model to learn transferable task-adaptation strategies using a limited number of training meta-tasks.
arXiv Detail & Related papers (2023-05-30T05:47:28Z)
- Learning to Learn with Indispensable Connections [6.040904021861969]
We propose a novel meta-learning method called Meta-LTH that includes indispensable (necessary) connections; a sketch follows this entry.
Our method improves classification accuracy by approximately 2% (20-way 1-shot task setting) on the Omniglot dataset.
arXiv Detail & Related papers (2023-04-06T04:53:13Z)
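The summary suggests a lottery-ticket-style selection of necessary connections; below is a hedged sketch using magnitude pruning as a stand-in criterion. Meta-LTH's actual selection rule may differ.

```python
import torch

def indispensable_mask(weight: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Binary mask keeping the top keep_ratio fraction of weights by magnitude."""
    k = max(1, int(weight.numel() * keep_ratio))
    threshold = weight.abs().flatten().topk(k).values.min()
    return (weight.abs() >= threshold).float()

# Usage: apply the mask during meta-training so that only the retained
# ("indispensable") connections carry updates.
w = torch.randn(64, 64)
mask = indispensable_mask(w, keep_ratio=0.3)
w_pruned = w * mask
```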
- Unsupervised Meta-Learning via Few-shot Pseudo-supervised Contrastive Learning [72.3506897990639]
We propose a simple yet effective unsupervised meta-learning framework, coined Pseudo-supervised Contrast (PsCo), for few-shot classification; a sketch follows this entry.
PsCo outperforms existing unsupervised meta-learning methods on various in-domain and cross-domain few-shot classification benchmarks.
arXiv Detail & Related papers (2023-03-02T06:10:13Z)
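As an illustration of the pseudo-supervised idea, here is a hedged InfoNCE-style loss where samples sharing a pseudo-label (e.g., a cluster assignment) are treated as positives. This pairing scheme is an assumption for illustration; PsCo's actual objective may differ.

```python
import torch
import torch.nn.functional as F

def pseudo_supervised_contrastive(z, pseudo_labels, temperature=0.1):
    """z: (N, d) embeddings; pseudo_labels: (N,) pseudo-class ids."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                       # pairwise similarities
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))           # drop self-similarity
    pos = (pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos.sum(dim=1)
    valid = pos_counts > 0  # anchors without any pseudo-positive are skipped
    loss = -(log_prob.masked_fill(~pos, 0.0).sum(dim=1))[valid] / pos_counts[valid]
    return loss.mean()
```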
- Task-Adaptive Few-shot Node Classification [49.79924004684395]
We propose a task-adaptive node classification framework under the few-shot learning setting.
Specifically, we first accumulate meta-knowledge across classes with abundant labeled nodes.
Then we transfer such knowledge to the classes with limited labeled nodes via our proposed task-adaptive modules.
arXiv Detail & Related papers (2022-06-23T20:48:27Z)
- Generating meta-learning tasks to evolve parametric loss for classification learning [1.1355370218310157]
In existing meta-learning approaches, learning tasks for training meta-models are usually collected from public datasets.
We propose a meta-learning approach that uses randomly generated meta-learning tasks to obtain a parametric loss for classification learning on big data; a toy sketch follows this entry.
arXiv Detail & Related papers (2021-11-20T13:07:55Z)
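A toy sketch of generating synthetic meta-learning tasks at random, assuming Gaussian class prototypes; the paper's actual generation process and the evolved parametric loss are not reproduced here.

```python
import numpy as np

def random_task(num_classes=5, shots=1, queries=15, dim=32, rng=None):
    """Generate one synthetic N-way K-shot task from random Gaussian classes."""
    rng = rng or np.random.default_rng()
    prototypes = rng.normal(size=(num_classes, dim))
    def draw(n):
        # samples cluster around their class prototype with small noise
        x = prototypes[:, None, :] + 0.1 * rng.normal(size=(num_classes, n, dim))
        y = np.repeat(np.arange(num_classes), n)
        return x.reshape(-1, dim), y
    return draw(shots), draw(queries)  # (support set, query set)
```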
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Expert Training: Task Hardness Aware Meta-Learning for Few-Shot Classification [62.10696018098057]
We propose an easy-to-hard expert meta-training strategy to arrange the training tasks properly.
A task hardness aware module is designed and integrated into the training procedure to estimate the hardness of a task.
Experimental results on the miniImageNet and tieredImageNetSketch datasets show that meta-learners obtain better results with our expert training strategy; a sketch follows this entry.
arXiv Detail & Related papers (2020-07-13T08:49:00Z)
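In the same easy-to-hard spirit, a hedged sketch of hardness-aware task scheduling; the `hardness_fn` proxy and the `meta_learner.train_on` API are illustrative assumptions, not the paper's exact module.

```python
def expert_training(meta_learner, tasks, hardness_fn, epochs):
    """Present tasks easy-to-hard, widening the visible prefix each epoch."""
    schedule = sorted(tasks, key=hardness_fn)  # easiest tasks first
    for epoch in range(epochs):
        visible = schedule[: max(1, len(schedule) * (epoch + 1) // epochs)]
        for task in visible:
            meta_learner.train_on(task)  # hypothetical API
```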
- iTAML: An Incremental Task-Agnostic Meta-learning Approach [123.10294801296926]
Humans can continuously learn new knowledge as their experience grows.
Previously learned knowledge in deep neural networks can quickly fade when they are trained on a new task.
We introduce a novel meta-learning approach that seeks to maintain an equilibrium between all encountered tasks.
arXiv Detail & Related papers (2020-03-25T21:42:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.