Few-Shot Learning on Graphs: A Survey
- URL: http://arxiv.org/abs/2203.09308v1
- Date: Thu, 17 Mar 2022 13:21:11 GMT
- Title: Few-Shot Learning on Graphs: A Survey
- Authors: Chuxu Zhang, Kaize Ding, Jundong Li, Xiangliang Zhang, Yanfang Ye,
Nitesh V. Chawla, Huan Liu
- Abstract summary: Graph representation learning has attracted tremendous attention due to its remarkable performance in many real-world applications.
(Semi-)supervised graph representation learning models for specific tasks often suffer from the label sparsity issue.
Few-shot learning on graphs (FSLG) has been proposed to tackle the performance degradation caused by limited annotated data.
- Score: 92.47605211946149
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph representation learning has attracted tremendous attention due to its
remarkable performance in many real-world applications. However, prevailing
(semi-)supervised graph representation learning models for specific tasks often
suffer from the label sparsity issue, as data labeling is time- and
resource-consuming. In light of this, few-shot learning on graphs (FSLG), which
combines the strengths of graph representation learning and few-shot learning,
has been proposed to tackle the performance degradation caused by limited
annotated data. There have been many recent studies on FSLG. In this paper, we
comprehensively survey these works as a series of methods and applications.
Specifically, we first introduce the challenges and bases of FSLG, then
categorize and summarize existing work on FSLG in terms of three major graph
mining tasks at different granularity levels, i.e., node, edge, and graph.
Finally, we share our thoughts on some future research directions of FSLG. The
authors of this survey have contributed significantly to the AI literature on
FSLG over the last few years.
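To make the FSLG setting concrete, here is a minimal sketch (not taken from the survey itself) of an N-way K-shot episode for few-shot node classification, using a simple nearest-prototype classifier over precomputed node embeddings. The data, function names (`sample_episode`, `prototype_accuracy`), and episode sizes are illustrative assumptions, not the survey's prescribed method.

```python
# A toy N-way K-shot episode for few-shot node classification, assuming node
# embeddings have already been produced by some graph encoder (e.g., a GNN).
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for encoder outputs: 500 nodes, 16-dim embeddings, 10 classes.
embeddings = rng.normal(size=(500, 16))
labels = rng.integers(0, 10, size=500)

def sample_episode(embeddings, labels, n_way=3, k_shot=5, q_query=5, rng=rng):
    """Sample one episode: support/query node splits for n_way sampled classes."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.where(labels == c)[0])
        support.append(idx[:k_shot])
        query.append(idx[k_shot:k_shot + q_query])
    return classes, np.stack(support), np.stack(query)

def prototype_accuracy(embeddings, classes, support, query):
    """Classify query nodes by nearest class prototype (mean of support embeddings)."""
    protos = embeddings[support].mean(axis=1)                    # (n_way, dim)
    q_emb = embeddings[query].reshape(-1, embeddings.shape[1])   # (n_way*q_query, dim)
    dists = np.linalg.norm(q_emb[:, None, :] - protos[None, :, :], axis=-1)
    pred = dists.argmin(axis=1)
    true = np.repeat(np.arange(len(classes)), query.shape[1])
    return (pred == true).mean()

classes, support, query = sample_episode(embeddings, labels)
print("episode accuracy:", prototype_accuracy(embeddings, classes, support, query))
```

In actual FSLG methods the embeddings would come from a trained graph encoder and many such episodes would be sampled during meta-training; the sketch only illustrates the episode structure.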
Related papers
- Continual Learning on Graphs: Challenges, Solutions, and Opportunities [72.7886669278433]
We provide a comprehensive review of existing continual graph learning (CGL) algorithms.
We compare these methods with traditional continual learning techniques and analyze the applicability of those techniques to forgetting tasks.
We will maintain an up-to-date repository featuring a comprehensive list of accessible algorithms.
arXiv Detail & Related papers (2024-02-18T12:24:45Z) - A Survey of Data-Efficient Graph Learning [16.053913182723143]
We introduce a novel concept of Data-Efficient Graph Learning (DEGL) as a research frontier.
We systematically review recent advances on several key aspects, including self-supervised graph learning, semi-supervised graph learning, and few-shot graph learning.
arXiv Detail & Related papers (2024-02-01T09:28:48Z) - Graph Domain Adaptation: Challenges, Progress and Prospects [61.9048172631524]
We propose graph domain adaptation as an effective knowledge-transfer paradigm across graphs.
GDA introduces a set of task-related graphs as source graphs and adapts the knowledge learned from the source graphs to the target graphs.
We outline the research status and challenges, propose a taxonomy, introduce the details of representative works, and discuss the prospects.
arXiv Detail & Related papers (2024-02-01T02:44:32Z) - Class-Imbalanced Learning on Graphs: A Survey [16.175306073813235]
This survey aims to offer a comprehensive understanding of the current state of the art in class-imbalanced learning on graphs (CILG).
We introduce the first taxonomy of existing work and its connection to existing imbalanced learning literature.
We critically analyze recent work in CILG and discuss urgent lines of inquiry within the topic.
arXiv Detail & Related papers (2023-04-09T19:21:46Z) - Counterfactual Learning on Graphs: A Survey [34.47646823407408]
Graph neural networks (GNNs) have achieved great success in representation learning on graphs.
Counterfactual learning on graphs has shown promising results in alleviating some of their drawbacks.
Various approaches have been proposed for counterfactual fairness, explainability, link prediction and other applications on graphs.
arXiv Detail & Related papers (2023-04-03T21:42:42Z) - Graph Pooling for Graph Neural Networks: Progress, Challenges, and Opportunities [128.55790219377315]
Graph neural networks have emerged as a leading architecture for many graph-level tasks.
Graph pooling is indispensable for obtaining a holistic graph-level representation of the whole graph (see the pooling sketch after this list).
arXiv Detail & Related papers (2022-04-15T04:02:06Z) - Graph Self-Supervised Learning: A Survey [73.86209411547183]
Self-supervised learning (SSL) has become a promising and trending learning paradigm for graph data.
We present a timely and comprehensive review of the existing approaches which employ SSL techniques for graph data.
arXiv Detail & Related papers (2021-02-27T03:04:21Z) - Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
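The graph pooling entry above motivates a readout that collapses per-node embeddings into one vector per graph. The sketch below shows the simplest such readout, a flat mean pool over a batch of graphs; it assumes per-node GNN embeddings and a node-to-graph assignment vector are already available, and the names (`mean_pool`, `graph_ids`) are illustrative, not from any of the surveyed papers.

```python
# Minimal mean-pooling readout: average node embeddings within each graph of a
# batch to obtain graph-level representations. Hierarchical or learnable
# pooling operators (the focus of the survey above) are deliberately omitted.
import numpy as np

def mean_pool(node_embeddings, graph_ids):
    """node_embeddings: (num_nodes, dim) GNN outputs (assumed given).
    graph_ids: (num_nodes,) index of the graph each node belongs to.
    Returns: (num_graphs, dim) graph-level representations."""
    num_graphs = graph_ids.max() + 1
    dim = node_embeddings.shape[1]
    sums = np.zeros((num_graphs, dim))
    np.add.at(sums, graph_ids, node_embeddings)          # scatter-add per graph
    counts = np.bincount(graph_ids, minlength=num_graphs)
    return sums / counts[:, None]

# Toy batch: 7 nodes split across 2 graphs.
emb = np.arange(14, dtype=float).reshape(7, 2)
ids = np.array([0, 0, 0, 1, 1, 1, 1])
print(mean_pool(emb, ids))   # one 2-dim vector per graph
```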