A Simple Yet Effective Pretraining Strategy for Graph Few-shot Learning
- URL: http://arxiv.org/abs/2203.15936v1
- Date: Tue, 29 Mar 2022 22:30:00 GMT
- Title: A Simple Yet Effective Pretraining Strategy for Graph Few-shot Learning
- Authors: Zhen Tan, Kaize Ding, Ruocheng Guo and Huan Liu
- Abstract summary: We propose a simple transductive fine-tuning based framework as a new paradigm for graph few-shot learning.
For pretraining, we propose a supervised contrastive learning framework with data augmentation strategies specific to few-shot node classification.
- Score: 38.66690010054665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, increasing attention has been devoted to the graph few-shot
learning problem, where the target novel classes only contain a few labeled
nodes. Among many existing endeavors, episodic meta-learning has become the
most prevailing paradigm, and its episodic emulation of the test environment is
believed to equip the graph neural network models with adaptability to novel
node classes. However, in the image domain, recent results have shown that
feature reuse is more likely the key to meta-learning's success in few-shot
extrapolation. Based on this observation, in this work, we propose a simple
transductive fine-tuning based framework as a new paradigm for graph few-shot
learning. In the proposed paradigm, a graph encoder backbone is pretrained with
base classes, and a simple linear classifier is fine-tuned by the few labeled
samples and is tasked to classify the unlabeled ones. For pretraining, we
propose a supervised contrastive learning framework with data augmentation
strategies specific to few-shot node classification to improve the
extrapolation of a GNN encoder. Finally, extensive experiments conducted on
three benchmark datasets demonstrate the clear advantage of our framework
over state-of-the-art methods.
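The two-stage paradigm described in the abstract can be illustrated with a minimal numpy sketch. This is a hypothetical simplification, not the authors' implementation: `supcon_loss` stands in for the supervised contrastive pretraining objective on base-class node embeddings (graph augmentations and the GNN encoder itself are omitted), and `transductive_linear_probe` stands in for fine-tuning a linear classifier on the few labeled samples and classifying the unlabeled ones.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.5):
    """Supervised contrastive loss over node embeddings.
    z: (n, d) encoder outputs; labels: (n,) base-class labels.
    Pulls same-class embeddings together, pushes others apart."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / tau                                # temperature-scaled similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)                  # exclude each anchor itself
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    denom = (np.exp(logits) * not_self).sum(axis=1, keepdims=True)
    log_prob = logits - np.log(denom)                  # log-softmax over other nodes
    pos = (labels[:, None] == labels[None, :]) & not_self
    # average log-probability over same-class positives, per anchor
    per_anchor = (log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return -per_anchor[pos.sum(axis=1) > 0].mean()

def transductive_linear_probe(z_support, y_support, z_query, n_classes,
                              lr=0.5, steps=200):
    """Fit a linear softmax classifier on the few labeled (support)
    embeddings by gradient descent, then label the unlabeled queries."""
    d = z_support.shape[1]
    W, b = np.zeros((d, n_classes)), np.zeros(n_classes)
    Y = np.eye(n_classes)[y_support]                   # one-hot targets
    for _ in range(steps):
        logits = z_support @ W + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - Y) / len(y_support)                # cross-entropy gradient
        W -= lr * z_support.T @ grad
        b -= lr * grad.sum(axis=0)
    return (z_query @ W + b).argmax(axis=1)
```

On toy embeddings, a labeling that groups nearby points yields a lower contrastive loss than a mismatched one, and the probe recovers the query labels from two support examples:

```python
z = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
assert supcon_loss(z, np.array([0, 0, 1, 1])) < supcon_loss(z, np.array([0, 1, 0, 1]))
pred = transductive_linear_probe(z[[0, 2]], np.array([0, 1]), z[[1, 3]], 2)
# pred == [0, 1]
```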
Related papers
- Graph Mining under Data scarcity [6.229055041065048]
We propose an Uncertainty Estimator framework that can be applied on top of any generic Graph Neural Network (GNN).
We train these models under the classic episodic learning paradigm in the $n$-way, $k$-shot fashion, in an end-to-end setting.
Our method outperforms the baselines, which demonstrates the efficacy of the Uncertainty Estimator for Few-shot node classification on graphs with a GNN.
arXiv Detail & Related papers (2024-06-07T10:50:03Z) - SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of finetuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z) - Inductive Linear Probing for Few-shot Node Classification [23.137926097692844]
We conduct an empirical study to highlight the limitations of current frameworks in the inductive few-shot node classification setting.
We propose a simple yet competitive baseline approach specifically tailored for inductive few-shot node classification tasks.
arXiv Detail & Related papers (2023-06-14T01:33:06Z) - Transductive Linear Probing: A Novel Framework for Few-Shot Node
Classification [56.17097897754628]
We show that transductive linear probing with self-supervised graph contrastive pretraining can outperform the state-of-the-art fully supervised meta-learning based methods under the same protocol.
We hope this work can shed new light on few-shot node classification problems and foster future research on learning from scarcely labeled instances on graphs.
arXiv Detail & Related papers (2022-12-11T21:10:34Z) - Similarity-aware Positive Instance Sampling for Graph Contrastive
Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Budget-aware Few-shot Learning via Graph Convolutional Network [56.41899553037247]
This paper tackles the problem of few-shot learning, which aims to learn new visual concepts from a few examples.
A common problem setting in few-shot classification assumes random sampling strategy in acquiring data labels.
We introduce a new budget-aware few-shot learning problem that aims to learn novel object categories.
arXiv Detail & Related papers (2022-01-07T02:46:35Z) - Weakly-supervised Graph Meta-learning for Few-shot Node Classification [53.36828125138149]
We propose a new graph meta-learning framework -- Graph Hallucination Networks (Meta-GHN).
Based on a new robustness-enhanced episodic training, Meta-GHN is meta-learned to hallucinate clean node representations from weakly-labeled data.
Extensive experiments demonstrate the superiority of Meta-GHN over existing graph meta-learning studies.
arXiv Detail & Related papers (2021-06-12T22:22:10Z) - Looking back to lower-level information in few-shot learning [4.873362301533825]
We propose the utilization of lower-level, supporting information, namely the feature embeddings of the hidden neural network layers, to improve classification accuracy.
Our experiments on two popular few-shot learning datasets, miniImageNet and tieredImageNet, show that our method can utilize the lower-level information in the network to improve state-of-the-art classification performance.
arXiv Detail & Related papers (2020-05-27T20:32:13Z) - Adaptive-Step Graph Meta-Learner for Few-Shot Graph Classification [25.883839335786025]
We propose a novel framework consisting of a graph meta-learner, which uses GNN-based modules for fast adaptation on graph data.
Our framework gets state-of-the-art results on several few-shot graph classification tasks compared to baselines.
arXiv Detail & Related papers (2020-03-18T14:38:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.