A Complete Survey on Contemporary Methods, Emerging Paradigms and Hybrid Approaches for Few-Shot Learning
- URL: http://arxiv.org/abs/2402.03017v3
- Date: Fri, 24 Jan 2025 13:36:52 GMT
- Title: A Complete Survey on Contemporary Methods, Emerging Paradigms and Hybrid Approaches for Few-Shot Learning
- Authors: Georgios Tsoumplekas, Vladislav Li, Panagiotis Sarigiannidis, Vasileios Argyriou
- Abstract summary: Few-Shot Learning aims to enable rapid adaptation to novel learning tasks.
Recent trends shaping the field, outstanding challenges, and promising future research directions are discussed.
- Score: 5.8497833718980345
- Abstract: Despite the widespread success of deep learning, its intense requirements for vast amounts of data and extensive training make it impractical for various real-world applications where data is scarce. In recent years, Few-Shot Learning (FSL) has emerged as a learning paradigm that aims to address these limitations by leveraging prior knowledge to enable rapid adaptation to novel learning tasks. Because its properties complement deep learning's data-intensive needs, FSL has seen significant growth in the past few years. This survey provides a comprehensive overview of both well-established methods and recent advancements in the FSL field. The presented taxonomy extends previously proposed ones by incorporating emerging FSL paradigms, such as in-context learning, along with novel categories within the meta-learning paradigm for FSL, including neural processes and probabilistic meta-learning. Furthermore, a holistic overview of FSL is provided by discussing hybrid FSL approaches that extend FSL beyond the typically examined supervised learning setting. The survey also explores FSL's diverse applications across various domains. Finally, recent trends shaping the field, outstanding challenges, and promising future research directions are discussed.
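As a concrete illustration of the metric-based meta-learning family that FSL surveys typically cover, the sketch below shows a minimal prototypical-network-style episode. It uses hand-made 2-D embeddings rather than a learned embedding network, and none of the data or names come from the paper; it is only meant to show the N-way K-shot mechanics of classifying a query by its nearest class prototype.

```python
import numpy as np

# Toy 3-way 2-shot episode: in a real system these vectors would come from
# a pretrained embedding network; here they are hand-made 2-D points.
support = {
    "cat":  np.array([[0.9, 0.1], [1.1, 0.0]]),
    "dog":  np.array([[0.0, 1.0], [0.1, 0.9]]),
    "bird": np.array([[-1.0, -1.0], [-0.9, -1.1]]),
}

# Each class prototype is the mean of that class's support embeddings.
prototypes = {label: feats.mean(axis=0) for label, feats in support.items()}

def classify(query: np.ndarray) -> str:
    """Assign the query to the class whose prototype is nearest (Euclidean)."""
    return min(prototypes, key=lambda lbl: np.linalg.norm(query - prototypes[lbl]))

print(classify(np.array([1.0, 0.05])))  # a point near the "cat" support examples
```

With only two labeled examples per class, the model adapts to a new episode simply by recomputing prototypes, which is what makes this family of methods attractive in the data-scarce settings the abstract describes.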
Related papers
- Latest Advancements Towards Catastrophic Forgetting under Data Scarcity: A Comprehensive Survey on Few-Shot Class Incremental Learning [13.604908618597944]
Data scarcity significantly complicates the continual learning problem.
Recent progress in few-shot class incremental learning methods offers insight into how to tackle the problem.
Our extensive discussion presents the open challenges, potential solutions, and future directions of FSCIL.
arXiv Detail & Related papers (2025-02-12T07:39:44Z) - Federated Large Language Models: Current Progress and Future Directions [63.68614548512534]
This paper surveys Federated learning for LLMs (FedLLM), highlighting recent advances and future directions.
We focus on two key aspects: fine-tuning and prompt learning in a federated setting, discussing existing work and associated research challenges.
arXiv Detail & Related papers (2024-09-24T04:14:33Z) - LSFSL: Leveraging Shape Information in Few-shot Learning [11.145085584637746]
Few-shot learning techniques seek to learn the underlying patterns in data using fewer samples, analogous to how humans learn from limited experience.
In this limited-data scenario, the challenges associated with deep neural networks, such as shortcut learning and texture bias behaviors, are further exacerbated.
We propose LSFSL, which encourages the model to learn more generalizable features by utilizing the implicit prior information present in the data.
arXiv Detail & Related papers (2023-04-13T16:59:22Z) - Semi-Supervised and Unsupervised Deep Visual Learning: A Survey [76.2650734930974]
Semi-supervised learning and unsupervised learning offer promising paradigms to learn from an abundance of unlabeled visual data.
We review the recent advanced deep learning algorithms on semi-supervised learning (SSL) and unsupervised learning (UL) for visual recognition from a unified perspective.
arXiv Detail & Related papers (2022-08-24T04:26:21Z) - A Comprehensive Survey of Few-shot Learning: Evolution, Applications,
Challenges, and Opportunities [5.809416101410813]
Few-shot learning has emerged as an effective learning method and shows great potential.
We extensively investigated 200+ latest papers on FSL published in the past three years.
We propose a novel taxonomy to classify the existing work according to the level of abstraction of knowledge.
arXiv Detail & Related papers (2022-05-13T16:24:35Z) - A Framework of Meta Functional Learning for Regularising Knowledge
Transfer [89.74127682599898]
This work proposes a novel framework of Meta Functional Learning (MFL) by meta-learning a generalisable functional model from data-rich tasks.
MFL computes meta-knowledge on functional regularisation that generalises across learning tasks, so that functional training on limited labelled data promotes learning more discriminative functions.
arXiv Detail & Related papers (2022-03-28T15:24:09Z) - A Strong Baseline for Semi-Supervised Incremental Few-Shot Learning [54.617688468341704]
Few-shot learning aims to learn models that generalize to novel classes with limited training samples.
We propose a novel paradigm containing two parts: (1) a well-designed meta-training algorithm for mitigating ambiguity between base and novel classes caused by unreliable pseudo labels and (2) a model adaptation mechanism to learn discriminative features for novel classes while preserving base knowledge using few labeled and all the unlabeled data.
arXiv Detail & Related papers (2021-10-21T13:25:52Z) - Graph-based Semi-supervised Learning: A Comprehensive Review [51.26862262550445]
Semi-supervised learning (SSL) has tremendous value in practice due to its ability to utilize both labeled and unlabeled data.
An important class of SSL methods naturally represents data as graphs, corresponding to graph-based semi-supervised learning (GSSL).
GSSL methods have demonstrated their advantages in various domains due to their unique structure, universal applicability, and scalability to large-scale data.
arXiv Detail & Related papers (2021-02-26T05:11:09Z) - Learning from Very Few Samples: A Survey [80.06120185496403]
Few sample learning is significant and challenging in the field of machine learning.
Conventional learning algorithms, by contrast, typically entail hundreds or thousands of supervised samples to guarantee generalization ability.
arXiv Detail & Related papers (2020-09-06T06:13:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.