Graph Few-shot Learning with Task-specific Structures
- URL: http://arxiv.org/abs/2210.12130v1
- Date: Fri, 21 Oct 2022 17:40:21 GMT
- Title: Graph Few-shot Learning with Task-specific Structures
- Authors: Song Wang, Chen Chen, Jundong Li
- Abstract summary: Existing graph few-shot learning methods typically leverage Graph Neural Networks (GNNs) and perform classification across a series of meta-tasks.
We propose a novel framework that learns a task-specific structure for each meta-task.
In this way, we can learn node representations with the task-specific structure tailored for each meta-task.
- Score: 38.52226241144403
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph few-shot learning is of great importance among various graph learning
tasks. Under the few-shot scenario, models are often required to conduct
classification given limited labeled samples. Existing graph few-shot learning
methods typically leverage Graph Neural Networks (GNNs) and perform
classification across a series of meta-tasks. Nevertheless, these methods
generally rely on the original graph (i.e., the graph that the meta-task is
sampled from) to learn node representations. Consequently, the graph structure
used in each meta-task is identical. Since the class sets are different across
meta-tasks, node representations should be learned in a task-specific manner to
promote classification performance. Therefore, to adaptively learn node
representations across meta-tasks, we propose a novel framework that learns a
task-specific structure for each meta-task. To handle the variety of nodes
across meta-tasks, we extract relevant nodes and learn task-specific structures
based on node influence and mutual information. In this way, we can learn node
representations with the task-specific structure tailored for each meta-task.
We further conduct extensive experiments on five node classification datasets
under both single- and multiple-graph settings to validate the superiority of
our framework over the state-of-the-art baselines. Our code is provided at
https://github.com/SongW-SW/GLITTER.
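The abstract states that task-relevant nodes are extracted and task-specific structures are learned from node influence and mutual information, but does not spell out the procedure. The following is a minimal illustrative sketch, not the authors' GLITTER implementation: it assumes node influence is approximated with personalized PageRank seeded at the support nodes of a meta-task, and it omits the mutual-information component entirely. All function names and parameters are hypothetical.

```python
# Hypothetical sketch of selecting task-relevant nodes by an influence score
# and inducing a task-specific adjacency for one meta-task.
# Assumptions: influence ~ personalized PageRank from the support (labeled) nodes.
import numpy as np

def personalized_pagerank(adj, seed_nodes, alpha=0.15, iters=50):
    """Approximate how strongly each node influences the seed (support) nodes."""
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    trans = adj / deg                       # row-normalized transition matrix
    restart = np.zeros(n)
    restart[seed_nodes] = 1.0 / len(seed_nodes)
    score = restart.copy()
    for _ in range(iters):                  # power iteration with restart
        score = (1 - alpha) * trans.T @ score + alpha * restart
    return score

def task_specific_structure(adj, support_nodes, num_keep=20):
    """Keep the most influential nodes for this meta-task and return the
    induced (task-specific) adjacency plus the kept node indices."""
    influence = personalized_pagerank(adj, support_nodes)
    keep = np.argsort(-influence)[:num_keep]
    keep = np.union1d(keep, support_nodes)  # always retain the labeled support nodes
    task_adj = adj[np.ix_(keep, keep)]
    return task_adj, keep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    adj = (rng.random((100, 100)) < 0.05).astype(float)
    adj = np.maximum(adj, adj.T)            # symmetric toy graph
    support = [3, 17, 42]                   # labeled nodes of one meta-task
    task_adj, kept = task_specific_structure(adj, support)
    print(task_adj.shape, kept[:10])
```

A GNN would then be run on `task_adj` (rather than the full original graph) to produce node representations tailored to that meta-task's class set.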
Related papers
- Meta-GPS++: Enhancing Graph Meta-Learning with Contrastive Learning and Self-Training [22.473322546354414]
We propose a novel framework for few-shot node classification called Meta-GPS++.
We first adopt an efficient method to learn discriminative node representations on homophilic and heterophilic graphs.
We also apply self-training to extract valuable information from unlabeled nodes.
arXiv Detail & Related papers (2024-07-20T03:05:12Z)
- One for All: Towards Training One Graph Model for All Classification Tasks [61.656962278497225]
A unified model for various graph tasks remains underexplored, primarily due to the challenges unique to the graph learning domain.
We propose One for All (OFA), the first general framework that can use a single graph model to address the above challenges.
OFA performs well across different tasks, making it the first general-purpose, cross-domain classification model on graphs.
arXiv Detail & Related papers (2023-09-29T21:15:26Z)
- Graph Contrastive Learning Meets Graph Meta Learning: A Unified Method for Few-shot Node Tasks [68.60884768323739]
We introduce Contrastive Few-Shot Node Classification (COLA).
COLA uses graph augmentations to identify semantically similar nodes, which enables the construction of meta-tasks without the need for label information.
Through extensive experiments, we validate the essentiality of each component in our design and demonstrate that COLA achieves new state-of-the-art on all tasks.
arXiv Detail & Related papers (2023-09-19T07:24:10Z)
- Contrastive Meta-Learning for Few-shot Node Classification [54.36506013228169]
Few-shot node classification aims to predict labels for nodes on graphs with only limited labeled nodes as references.
We create a novel contrastive meta-learning framework on graphs, named COSMIC, with two key designs.
arXiv Detail & Related papers (2023-06-27T02:22:45Z)
- Task-Equivariant Graph Few-shot Learning [7.78018583713337]
It is important for Graph Neural Networks (GNNs) to be able to classify nodes with a limited number of labeled nodes, known as few-shot node classification.
We propose a new approach, the Task-Equivariant Graph few-shot learning (TEG) framework.
Our TEG framework enables the model to learn transferable task-adaptation strategies using a limited number of training meta-tasks.
arXiv Detail & Related papers (2023-05-30T05:47:28Z)
- Relational Multi-Task Learning: Modeling Relations between Data and Tasks [84.41620970886483]
A key assumption in multi-task learning is that at inference time the model only has access to a given data point, not to that data point's labels from other tasks.
Here we introduce a novel relational multi-task learning setting where we leverage data point labels from auxiliary tasks to make more accurate predictions.
We develop MetaLink, where our key innovation is to build a knowledge graph that connects data points and tasks.
arXiv Detail & Related papers (2023-03-14T07:15:41Z)
- Task-Adaptive Few-shot Node Classification [49.79924004684395]
We propose a task-adaptive node classification framework under the few-shot learning setting.
Specifically, we first accumulate meta-knowledge across classes with abundant labeled nodes.
Then we transfer such knowledge to the classes with limited labeled nodes via our proposed task-adaptive modules.
arXiv Detail & Related papers (2022-06-23T20:48:27Z)
- Graph Representation Learning for Multi-Task Settings: a Meta-Learning Approach [5.629161809575013]
We propose a novel training strategy for graph representation learning, based on meta-learning.
Our method avoids the difficulties arising when learning to perform multiple tasks concurrently.
We show that the embeddings produced by a model trained with our method can be used to perform multiple tasks with comparable or, surprisingly, even higher performance than both single-task and multi-task end-to-end models.
arXiv Detail & Related papers (2022-01-10T12:58:46Z)
- A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings [7.025709586759655]
We propose a novel meta-learning strategy capable of producing multi-task node embeddings.
We show that the embeddings produced by our method can be used to perform multiple tasks with comparable or higher performance than classically trained models.
arXiv Detail & Related papers (2020-12-12T08:36:47Z)