Graph Contrastive Learning Meets Graph Meta Learning: A Unified Method
for Few-shot Node Tasks
- URL: http://arxiv.org/abs/2309.10376v1
- Date: Tue, 19 Sep 2023 07:24:10 GMT
- Authors: Hao Liu, Jiarui Feng, Lecheng Kong, Dacheng Tao, Yixin Chen, Muhan
Zhang
- Abstract summary: We introduce Contrastive Few-Shot Node Classification (COLA).
COLA uses graph augmentations to identify semantically similar nodes, which enables the construction of meta-tasks without the need for label information.
Through extensive experiments, we validate the essentiality of each component in our design and demonstrate that COLA achieves new state-of-the-art on all tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have become popular in Graph Representation
Learning (GRL). One fundamental application is few-shot node classification.
Most existing methods follow the meta learning paradigm, demonstrating the
ability to generalize quickly to few-shot tasks. However, recent works indicate that
graph contrastive learning combined with fine-tuning can significantly
outperform meta learning methods. Despite the empirical success, there is
limited understanding of the reasons behind it. In our study, we first identify
two crucial advantages of contrastive learning compared to meta learning,
including (1) the comprehensive utilization of graph nodes and (2) the power of
graph augmentations. To integrate the strength of both contrastive learning and
meta learning on the few-shot node classification tasks, we introduce a new
paradigm: Contrastive Few-Shot Node Classification (COLA). Specifically, COLA
employs graph augmentations to identify semantically similar nodes, which
enables the construction of meta-tasks without the need for label information.
Therefore, COLA can utilize all nodes to construct meta-tasks, further reducing
the risk of overfitting. Through extensive experiments, we validate the
essentiality of each component in our design and demonstrate that COLA achieves
new state-of-the-art on all tasks.
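The core idea described in the abstract — using graph augmentations to surface semantically similar nodes without labels — can be illustrated with a minimal sketch. The graph, augmentation probabilities, and the single propagation step below are all illustrative assumptions, not COLA's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 6 nodes on a ring, 8-dim random features (hypothetical data).
n, d = 6, 8
X = rng.normal(size=(n, d))
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

def propagate(A, X):
    """One step of symmetrically normalized neighbor aggregation (GCN-style)."""
    A_hat = A + np.eye(len(A))               # add self-loops
    d_inv_sqrt = np.diag(A_hat.sum(1) ** -0.5)
    return d_inv_sqrt @ A_hat @ d_inv_sqrt @ X

def augment(A, X, p_edge=0.2, p_feat=0.2):
    """Random edge dropping + feature masking, two common graph augmentations."""
    A_aug = A * (rng.random(A.shape) > p_edge)
    A_aug = np.triu(A_aug, 1)
    A_aug = A_aug + A_aug.T                  # keep the graph undirected
    X_aug = X * (rng.random(X.shape) > p_feat)
    return A_aug, X_aug

# Encode two independently augmented views of the same graph.
Z1 = propagate(*augment(A, X))
Z2 = propagate(*augment(A, X))

def cosine_sim(U, V):
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return U @ V.T

# S[i, j]: similarity of node i in view 1 to node j in view 2.
S = cosine_sim(Z1, Z2)

# Label-free "positives": each node's most similar node in the other view --
# the kind of cross-view signal a COLA-style method could use to assemble
# meta-tasks without any label information.
positives = S.argmax(axis=1)
```

Since the positives come from augmented views rather than class labels, every node in the graph can contribute to meta-task construction, which is the utilization advantage the abstract highlights.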
Related papers
- Meta-GPS++: Enhancing Graph Meta-Learning with Contrastive Learning and Self-Training [22.473322546354414]
We propose a novel framework for few-shot node classification called Meta-GPS++.
We first adopt an efficient method to learn discriminative node representations on homophilic and heterophilic graphs.
We also apply self-training to extract valuable information from unlabeled nodes.
arXiv Detail & Related papers (2024-07-20T03:05:12Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly simple approach for textual graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings from the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Task-Equivariant Graph Few-shot Learning [7.78018583713337]
It is important for Graph Neural Networks (GNNs) to be able to classify nodes with a limited number of labeled nodes, known as few-shot node classification.
We propose a new approach, the Task-Equivariant Graph few-shot learning (TEG) framework.
Our TEG framework enables the model to learn transferable task-adaptation strategies using a limited number of training meta-tasks.
arXiv Detail & Related papers (2023-05-30T05:47:28Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Graph Few-shot Learning with Task-specific Structures [38.52226241144403]
Existing graph few-shot learning methods typically leverage Graph Neural Networks (GNNs)
We propose a novel framework that learns a task-specific structure for each meta-task.
In this way, we can learn node representations with the task-specific structure tailored for each meta-task.
arXiv Detail & Related papers (2022-10-21T17:40:21Z)
- ARIEL: Adversarial Graph Contrastive Learning [51.14695794459399]
ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks.
ARIEL is more robust in the face of adversarial attacks.
arXiv Detail & Related papers (2022-08-15T01:24:42Z)
- Weakly-supervised Graph Meta-learning for Few-shot Node Classification [53.36828125138149]
We propose a new graph meta-learning framework: Graph Hallucination Networks (Meta-GHN).
Based on a new robustness-enhanced episodic training, Meta-GHN is meta-learned to hallucinate clean node representations from weakly-labeled data.
Extensive experiments demonstrate the superiority of Meta-GHN over existing graph meta-learning studies.
arXiv Detail & Related papers (2021-06-12T22:22:10Z)
- Self-supervised Graph Learning for Recommendation [69.98671289138694]
We explore self-supervised learning on the user-item graph for recommendation.
An auxiliary self-supervised task reinforces node representation learning via self-discrimination.
Empirical studies on three benchmark datasets demonstrate the effectiveness of SGL.
arXiv Detail & Related papers (2020-10-21T06:35:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.