Learning from Few Examples: A Summary of Approaches to Few-Shot Learning
- URL: http://arxiv.org/abs/2203.04291v1
- Date: Mon, 7 Mar 2022 23:15:21 GMT
- Title: Learning from Few Examples: A Summary of Approaches to Few-Shot Learning
- Authors: Archit Parnami and Minwoo Lee
- Abstract summary: Few-Shot Learning refers to the problem of learning the underlying pattern in the data just from a few training samples.
Deep learning solutions suffer from data hunger and require extensive computation time and resources.
Few-shot learning, which can drastically reduce the turnaround time of building machine learning applications, emerges as a low-cost solution.
- Score: 3.6930948691311016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-Shot Learning refers to the problem of learning the underlying pattern in the data from just a few training samples. Because they require large numbers of data samples, many deep learning solutions suffer from data hunger and extensive computation time and resources. Furthermore, data is often unavailable due not only to the nature of the problem or privacy concerns but also to the cost of data preparation: data collection, preprocessing, and labeling are strenuous human tasks. Therefore, few-shot learning, which can drastically reduce the turnaround time of building machine learning applications, emerges as a low-cost solution. This survey paper comprises a representative list of recently proposed few-shot learning algorithms. Given their learning dynamics and characteristics, the approaches to few-shot learning problems are discussed from the perspectives of meta-learning, transfer learning, and hybrid approaches (i.e., different variations of the few-shot learning problem).
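To make the problem setting concrete, below is a minimal sketch of one N-way K-shot episode classified with class prototypes, in the style of the metric-based meta-learning methods such surveys cover (e.g., Prototypical Networks). The toy data and the `embed` function are placeholders, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x):
    # Placeholder embedding; a real system would use a trained network.
    return x

# One N-way K-shot episode: N classes, K labeled "support" examples each,
# plus Q unlabeled "query" examples per class to classify.
N, K, Q, D = 5, 3, 4, 16
support = rng.normal(size=(N, K, D)) + np.arange(N)[:, None, None]  # toy class offsets
query = rng.normal(size=(N, Q, D)) + np.arange(N)[:, None, None]

# Prototype = mean embedding of each class's support set.
prototypes = embed(support).mean(axis=1)             # (N, D)

# Classify each query by its nearest prototype (squared Euclidean distance).
q = embed(query).reshape(N * Q, D)                   # (N*Q, D)
dists = ((q[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (N*Q, N)
pred = dists.argmin(axis=1)

truth = np.repeat(np.arange(N), Q)
print("episode accuracy:", (pred == truth).mean())
```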
Related papers
- Imitation Learning Inputting Image Feature to Each Layer of Neural Network [1.6574413179773757]
Imitation learning enables robots to learn and replicate human behavior from training data.
Recent advances in machine learning enable end-to-end learning approaches that directly process high-dimensional observation data, such as images.
This paper presents a method that addresses a challenge of such end-to-end approaches: it amplifies the influence of data with a relatively low correlation to the output.
arXiv Detail & Related papers (2024-01-18T02:44:18Z)
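The paper's exact weighting scheme is not given in the summary above, so the following is only a hypothetical sketch of the stated idea: measure how strongly each input feature correlates with the output and amplify the low-correlation ones. The inverse-correlation rule and all names here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: y depends strongly on x0 and only weakly on x1.
X = rng.normal(size=(200, 2))
y = 2.0 * X[:, 0] + 0.1 * X[:, 1] + 0.05 * rng.normal(size=200)

# Per-feature |Pearson correlation| with the output.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

# Hypothetical amplification (illustrative only, not the paper's scheme):
# rescale low-correlation features so training does not ignore them.
weights = 1.0 / (corr + 1e-3)
X_amplified = X * weights

print("correlations:", corr.round(3))
print("feature scales:", weights.round(2))
```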
- Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483]
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
arXiv Detail & Related papers (2023-11-29T05:33:28Z)
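Below is a minimal, self-contained sketch of the retrieval-augmentation pattern described above, with a toy corpus and a cosine-similarity lookup standing in for a real search engine; the hashing `embed` function is a placeholder, not the paper's method.

```python
import numpy as np

corpus = [
    "few-shot learning adapts models from a handful of labeled examples",
    "search engines index documents for fast text retrieval",
    "curriculum learning orders training data from easy to hard",
]

def embed(text, dim=64):
    # Toy hashing embedding standing in for a real text encoder.
    v = np.zeros(dim)
    for token in text.lower().split():
        v[hash(token) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

doc_vecs = np.stack([embed(d) for d in corpus])

def retrieve(query, k=1):
    scores = doc_vecs @ embed(query)          # cosine similarity (unit vectors)
    return [corpus[i] for i in np.argsort(-scores)[:k]]

query = "what is few-shot learning?"
context = " ".join(retrieve(query))
augmented_input = f"context: {context}\nquestion: {query}"
print(augmented_input)  # fed to the frozen pre-trained model downstream
```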
- Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery [0.5439020425819]
This paper explores post-disaster analytics using multimodal deep learning models trained with a curriculum learning method.
Curriculum learning emulates the progressive learning sequence in human education by training deep learning models on increasingly complex data.
arXiv Detail & Related papers (2023-10-29T18:46:33Z)
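Curriculum learning as summarized above can be sketched generically: score examples by an assumed difficulty measure, then widen the training pool from easy to hard across epochs. This illustrates the general recipe, not the paper's dynamic task-and-weight prioritization.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 8))
difficulty = np.abs(X).sum(axis=1)        # assumed difficulty proxy, illustrative
order = np.argsort(difficulty)            # easiest examples first

num_epochs = 5
for epoch in range(num_epochs):
    # Grow the visible fraction of the curriculum linearly each epoch.
    frac = (epoch + 1) / num_epochs
    visible = order[: max(1, int(frac * len(order)))]
    batch = X[rng.choice(visible, size=min(16, len(visible)), replace=False)]
    # train_step(model, batch) would go here.
    print(f"epoch {epoch}: training on {len(visible)} easiest examples")
```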
- On Inductive Biases for Machine Learning in Data Constrained Settings [0.0]
This thesis explores a different answer to the problem of learning expressive models in data-constrained settings.
Instead of relying on big datasets to learn neural networks, we will replace some modules with known functions reflecting the structure of the data.
Our approach falls under the umbrella of "inductive biases", which can be defined as hypotheses about the data at hand that restrict the space of models to explore.
arXiv Detail & Related papers (2023-02-21T14:22:01Z)
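One concrete instance of replacing a learned module with a known function: if the data is known to be periodic, fixed Fourier features can stand in for a learned feature extractor, leaving only a linear head to fit. This is a sketch under that assumption, not the thesis's specific constructions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Small dataset with known periodic structure.
x = rng.uniform(0, 2 * np.pi, size=40)
y = np.sin(3 * x) + 0.1 * rng.normal(size=40)

# Fixed, hand-designed module encoding the known structure (nothing learned here).
def fourier_features(x, K=5):
    return np.column_stack([f(k * x) for k in range(1, K + 1) for f in (np.sin, np.cos)])

Phi = fourier_features(x)                    # (40, 10)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # only the linear head is fit

print("fitted", Phi.shape[1], "weights instead of a full network on 40 points")
```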
- On Measuring the Intrinsic Few-Shot Hardness of Datasets [49.37562545777455]
We show that few-shot hardness may be intrinsic to datasets, for a given pre-trained model.
We propose a simple and lightweight metric called "Spread" that captures the intuition that few-shot learning is made possible by exploiting feature-space invariances between training and test samples.
Our metric better accounts for few-shot hardness compared to existing notions of hardness, and is 8-100x faster to compute.
arXiv Detail & Related papers (2022-11-16T18:53:52Z)
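The summary above gives only the intuition behind "Spread", not its definition, so the following is a loudly hypothetical proxy: measure how close test embeddings sit to the training set in a shared feature space, on the idea that a small train-test spread makes few-shot learning easier. The function name and formula are assumptions, not the paper's metric.

```python
import numpy as np

def spread_like_score(train_emb, test_emb):
    """Hypothetical proxy (not the paper's metric): mean distance from each
    test embedding to its nearest training embedding in a shared feature space."""
    d = np.linalg.norm(test_emb[:, None, :] - train_emb[None, :, :], axis=-1)
    return d.min(axis=1).mean()

rng = np.random.default_rng(4)
train_emb = rng.normal(size=(50, 32))
near = train_emb[:10] + 0.1 * rng.normal(size=(10, 32))   # test set close to train
far = rng.normal(loc=5.0, size=(10, 32))                  # test set far from train

print("near:", spread_like_score(train_emb, near).round(3))
print("far: ", spread_like_score(train_emb, far).round(3))
```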
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Budget-aware Few-shot Learning via Graph Convolutional Network [56.41899553037247]
This paper tackles the problem of few-shot learning, which aims to learn new visual concepts from a few examples.
A common problem setting in few-shot classification assumes a random sampling strategy for acquiring data labels.
We introduce a new budget-aware few-shot learning problem that not only aims to learn novel object categories but also needs to select informative examples to annotate in order to achieve data efficiency.
arXiv Detail & Related papers (2022-01-07T02:46:35Z)
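To illustrate the budget-aware setting, here is a generic uncertainty-sampling sketch that spends a fixed labeling budget on the pool examples the current model is least confident about; the paper's actual GCN-based selection policy is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
budget = 10                                        # how many labels we may acquire

# Unlabeled pool with class probabilities from some current classifier.
pool_probs = rng.dirichlet(np.ones(5), size=200)   # (200 examples, 5 classes)

# Uncertainty sampling: spend the budget on the least-confident predictions.
confidence = pool_probs.max(axis=1)
to_label = np.argsort(confidence)[:budget]

print(f"requesting labels for {budget} of {len(pool_probs)} examples")
print("selected indices:", sorted(to_label.tolist()))
```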
- Few-Cost Salient Object Detection with Adversarial-Paced Learning [95.0220555274653]
This paper proposes to learn an effective salient object detection model based on manual annotation of only a few training images.
We name this task few-cost salient object detection and propose an adversarial-paced learning (APL)-based framework to facilitate the few-cost learning scenario.
arXiv Detail & Related papers (2021-04-05T14:15:49Z)
- When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning? [53.523017945443115]
We describe natural prediction problems in which every sufficiently accurate training algorithm must encode, in the prediction model, essentially all the information about a large subset of its training examples.
Our results do not depend on the training algorithm or the class of models used for learning.
arXiv Detail & Related papers (2020-12-11T15:25:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.