Few-Shot Learning on Graphs: from Meta-learning to Pre-training and
Prompting
- URL: http://arxiv.org/abs/2402.01440v3
- Date: Sat, 2 Mar 2024 08:27:26 GMT
- Authors: Xingtong Yu, Yuan Fang, Zemin Liu, Yuxia Wu, Zhihao Wen, Jianyuan Bo,
Xinming Zhang and Steven C.H. Hoi
- Abstract summary: This survey endeavors to synthesize recent developments, provide comparative insights, and identify future directions.
We systematically categorize existing studies into three major families: meta-learning approaches, pre-training approaches, and hybrid approaches.
We analyze the relationships among these methods and compare their strengths and limitations.
- Score: 56.25730255038747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph representation learning, a critical step in graph-centric tasks, has
seen significant advancements. Earlier techniques often operate in an
end-to-end setting, where performance heavily relies on the availability of
ample labeled data. This constraint has spurred the emergence of few-shot
learning on graphs, where only a few task-specific labels are available for
each task. Given the extensive literature in this field, this survey endeavors
to synthesize recent developments, provide comparative insights, and identify
future directions. We systematically categorize existing studies into three
major families: meta-learning approaches, pre-training approaches, and hybrid
approaches, with a finer-grained classification in each family to aid readers
in their method selection process. Within each category, we analyze the
relationships among these methods and compare their strengths and limitations.
Finally, we outline prospective future directions for few-shot learning on
graphs to catalyze continued innovation in this field.
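The few-shot setting the survey addresses can be made concrete with an episode sampler: in an N-way K-shot task, each episode draws N classes, K labeled support nodes per class, and a few query nodes to evaluate on. A minimal, library-free sketch (the function and parameter names are illustrative, not taken from the survey):

```python
import random

def sample_episode(node_labels, n_way=2, k_shot=1, q_query=2, seed=0):
    """Sample one N-way K-shot episode from a dict {node_id: label}.

    Returns (support, query), each a list of (node_id, label) pairs.
    """
    rng = random.Random(seed)
    by_label = {}
    for node, label in node_labels.items():
        by_label.setdefault(label, []).append(node)
    # Keep only classes with enough labeled nodes for support + query.
    eligible = [c for c, nodes in by_label.items() if len(nodes) >= k_shot + q_query]
    classes = rng.sample(eligible, n_way)
    support, query = [], []
    for c in classes:
        nodes = rng.sample(by_label[c], k_shot + q_query)
        support += [(n, c) for n in nodes[:k_shot]]
        query += [(n, c) for n in nodes[k_shot:]]
    return support, query
```

A meta-learning method would train over many such episodes, while a pre-training method would instead fit the few support labels in a downstream adaptation step.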
Related papers
- Exploring Graph Classification Techniques Under Low Data Constraints: A
Comprehensive Study [0.0]
It covers various techniques for graph data augmentation, including node and edge perturbation, graph coarsening, and graph generation.
The paper explores these areas in depth and delves into further sub-classifications.
It provides an extensive array of techniques that can be employed in solving graph processing problems faced in low-data scenarios.
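One of the augmentation families mentioned above, edge perturbation, is straightforward to sketch: drop a fraction of existing edges and add a few random ones. A minimal, library-free illustration (function and parameter names are assumptions, not from the paper):

```python
import random

def perturb_edges(edges, num_nodes, drop_rate=0.1, add_rate=0.1, seed=0):
    """Simple edge-perturbation augmentation for an undirected graph.

    `edges` is a set of (u, v) tuples with u < v.
    Drops each existing edge with probability `drop_rate`, then adds
    roughly `add_rate * len(edges)` random edges absent from the original.
    """
    rng = random.Random(seed)
    # Drop a fraction of existing edges.
    kept = {e for e in edges if rng.random() >= drop_rate}
    # Add random new edges not present in the original graph.
    n_add = int(len(edges) * add_rate)
    while n_add > 0:
        u, v = rng.sample(range(num_nodes), 2)
        e = (min(u, v), max(u, v))
        if e not in edges and e not in kept:
            kept.add(e)
            n_add -= 1
    return kept
```

In practice each call yields a slightly different view of the graph, which is exactly what low-data training or contrastive objectives exploit.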
arXiv Detail & Related papers (2023-11-21T17:23:05Z)
- A Survey of Graph Meets Large Language Model: Progress and Future Directions [38.63080573825683]
Large Language Models (LLMs) have achieved tremendous success in various domains.
LLMs have been leveraged in graph-related tasks to surpass traditional Graph Neural Network (GNN)-based methods.
arXiv Detail & Related papers (2023-11-21T07:22:48Z)
- A Survey of Imbalanced Learning on Graphs: Problems, Techniques, and Future Directions [64.84521350148513]
Graphs represent interconnected structures prevalent in a myriad of real-world scenarios.
Effective graph analytics, such as graph learning methods, enables users to gain profound insights from graph data.
However, these methods often suffer from data imbalance, a common issue in graph data where certain segments possess abundant data while others are scarce.
This necessitates the emerging field of imbalanced learning on graphs, which aims to correct these data distribution skews for more accurate and representative learning outcomes.
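A common baseline for correcting such skews is to reweight the loss by inverse class frequency, so that scarce classes contribute as much as abundant ones. A minimal, library-free sketch of this heuristic (the function name and normalization are illustrative, not drawn from the paper):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency class weights (the 'balanced' heuristic):

        weight_c = n_samples / (n_classes * count_c)

    Rare classes get weights > 1, frequent classes get weights < 1.
    """
    counts = Counter(labels)
    total, k = len(labels), len(counts)
    return {c: total / (k * n) for c, n in counts.items()}
```

These weights would typically multiply the per-node loss terms during GNN training.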
arXiv Detail & Related papers (2023-08-26T09:11:44Z)
- Curriculum Graph Machine Learning: A Survey [51.89783017927647]
Curriculum graph machine learning (Graph CL) integrates the strengths of graph machine learning and curriculum learning.
This paper comprehensively overviews approaches to Graph CL and presents a detailed survey of recent advances in this direction.
arXiv Detail & Related papers (2023-02-06T16:59:25Z) - State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z) - Graph Pooling for Graph Neural Networks: Progress, Challenges, and
Opportunities [128.55790219377315]
Graph neural networks have emerged as a leading architecture for many graph-level tasks.
Graph pooling is indispensable for obtaining a holistic representation of the whole graph.
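The simplest pooling readout averages node embeddings per graph, analogous to the global mean pooling found in common GNN libraries. A minimal, library-free sketch (the function name mirrors that convention but is otherwise an assumption):

```python
from collections import defaultdict

def global_mean_pool(node_embeddings, batch):
    """Average node embeddings per graph to get graph-level vectors.

    node_embeddings: list of equal-length vectors, one per node.
    batch: batch[i] is the index of the graph that node i belongs to.
    Returns {graph_index: pooled vector}.
    """
    groups = defaultdict(list)
    for vec, g in zip(node_embeddings, batch):
        groups[g].append(vec)
    # Column-wise mean within each graph's group of node vectors.
    return {
        g: [sum(col) / len(vecs) for col in zip(*vecs)]
        for g, vecs in groups.items()
    }
```

More sophisticated pooling operators (hierarchical, attention-based) replace this mean with a learned aggregation, but the input/output contract is the same.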
arXiv Detail & Related papers (2022-04-15T04:02:06Z)
- Self-supervised on Graphs: Contrastive, Generative, or Predictive [25.679620842010422]
Self-supervised learning (SSL) is emerging as a new paradigm for extracting informative knowledge through well-designed pretext tasks.
We divide existing graph SSL methods into three categories: contrastive, generative, and predictive.
We also summarize the commonly used datasets, evaluation metrics, downstream tasks, and open-source implementations of various algorithms.
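The contrastive category above typically trains with an InfoNCE-style objective: two views of the same node or graph are pulled together while other samples are pushed apart. A minimal single-anchor sketch using cosine similarity (a generic formulation, not any specific paper's loss):

```python
import math

def info_nce(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style contrastive loss for one anchor vector.

    `positive` is another view of the same node/graph; `negatives`
    are embeddings of other samples; `tau` is the temperature.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    pos = math.exp(cos(anchor, positive) / tau)
    neg = sum(math.exp(cos(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

The loss shrinks as the anchor aligns with its positive view and grows as it aligns with negatives, which is the behavior contrastive pre-training relies on.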
arXiv Detail & Related papers (2021-05-16T03:30:03Z) - Graph Self-Supervised Learning: A Survey [73.86209411547183]
Self-supervised learning (SSL) has become a promising and trending learning paradigm for graph data.
We present a timely and comprehensive review of the existing approaches which employ SSL techniques for graph data.
arXiv Detail & Related papers (2021-02-27T03:04:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.