Learning from Few Samples: A Survey
- URL: http://arxiv.org/abs/2007.15484v1
- Date: Thu, 30 Jul 2020 14:28:57 GMT
- Title: Learning from Few Samples: A Survey
- Authors: Nihar Bendre, Hugo Terashima Marín, and Peyman Najafirad
- Abstract summary: We study the existing few-shot meta-learning techniques in the computer vision domain based on their methods and evaluation metrics.
We provide a taxonomy for the techniques and categorize them as data-augmentation-, embedding-, optimization-, and semantics-based learning for few-shot, one-shot, and zero-shot settings.
- Score: 1.4146420810689422
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have been able to outperform humans in some cases, such as
image recognition and image classification. However, with the emergence of
various novel categories, the ability to continually widen the learning
capability of such networks from limited samples remains a challenge.
Techniques such as meta-learning and few-shot learning have shown promising
results, as they can learn or generalize to a novel category or task based on
prior knowledge. In this paper, we study the existing few-shot meta-learning
techniques in the computer vision domain based on their methods and evaluation
metrics. We provide a taxonomy for the techniques and categorize them as
data-augmentation-, embedding-, optimization-, and semantics-based learning for
few-shot, one-shot, and zero-shot settings. We then describe the seminal work
in each category and discuss how it approaches the problem of learning from few
samples. Lastly, we compare these techniques on the commonly used benchmark
datasets Omniglot and MiniImageNet, and discuss future directions for improving
their performance toward the final goal of outperforming humans.
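To make the few-shot setting above concrete, the following minimal sketch (illustrative only; all function and variable names are assumptions, not from the paper) samples an N-way K-shot episode from pre-computed image embeddings, as is done on Omniglot or MiniImageNet, and scores it with a nearest-prototype classifier in the spirit of the embedding-based methods in the taxonomy:

import numpy as np

def sample_episode(features_by_class, n_way=5, k_shot=1, n_query=15, rng=None):
    """features_by_class: dict mapping class id -> (num_images, dim) embedding array."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(list(features_by_class), size=n_way, replace=False)
    support, query, query_labels = [], [], []
    for label, c in enumerate(classes):
        idx = rng.permutation(len(features_by_class[c]))[: k_shot + n_query]
        feats = features_by_class[c][idx]
        support.append(feats[:k_shot])          # (k_shot, dim) support examples
        query.append(feats[k_shot:])            # (n_query, dim) query examples
        query_labels += [label] * n_query
    return np.stack(support), np.concatenate(query), np.array(query_labels)

def nearest_prototype_accuracy(support, query, query_labels):
    prototypes = support.mean(axis=1)           # per-class mean embedding: (n_way, dim)
    dists = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return float((dists.argmin(axis=1) == query_labels).mean())

Setting k_shot=1 gives the one-shot setting; zero-shot methods instead replace the support embeddings with semantic information such as attributes or word vectors, which is where the semantics-based branch of the taxonomy comes in.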
Related papers
- Budget-aware Few-shot Learning via Graph Convolutional Network [56.41899553037247]
This paper tackles the problem of few-shot learning, which aims to learn new visual concepts from a few examples.
A common problem setting in few-shot classification assumes a random sampling strategy for acquiring data labels.
We introduce a new budget-aware few-shot learning problem that aims to learn novel object categories under a limited labeling budget.
arXiv Detail & Related papers (2022-01-07T02:46:35Z)
- Meta Navigator: Search for a Good Adaptation Policy for Few-shot Learning [113.05118113697111]
Few-shot learning aims to adapt knowledge learned from previous tasks to novel tasks with only a limited amount of labeled data.
Research literature on few-shot learning exhibits great diversity, and different algorithms often excel in different few-shot learning scenarios.
We present Meta Navigator, a framework that attempts to address this limitation by searching for a higher-level strategy, i.e., a good adaptation policy.
arXiv Detail & Related papers (2021-09-13T07:20:01Z)
- Deep Metric Learning for Few-Shot Image Classification: A Selective Review [38.71276383292809]
Few-shot image classification is a challenging problem that aims to achieve human-level recognition based only on a small number of images.
Deep learning algorithms such as meta-learning, transfer learning, and metric learning have been employed recently and have achieved state-of-the-art performance.
arXiv Detail & Related papers (2021-05-17T20:27:59Z)
- A Survey on Contrastive Self-supervised Learning [0.0]
Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets.
Contrastive learning has recently become a dominant component in self-supervised learning methods for computer vision, natural language processing (NLP), and other domains.
This paper provides an extensive review of self-supervised methods that follow the contrastive approach.
arXiv Detail & Related papers (2020-10-31T21:05:04Z)
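As a rough illustration of the contrastive approach reviewed in the entry above (a generic InfoNCE / NT-Xent-style formulation, not any specific paper's loss), the sketch below scores two augmented views of each image in a batch against all other images:

import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) L2-normalized embeddings of two augmented views of the same images."""
    z = np.concatenate([z1, z2], axis=0)                 # (2B, dim)
    sim = z @ z.T / temperature                          # temperature-scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                       # never contrast an embedding with itself
    b = len(z1)
    targets = np.concatenate([np.arange(b, 2 * b), np.arange(b)])   # index of each row's positive
    row_max = sim.max(axis=1, keepdims=True)             # subtract the row max for numerical stability
    log_prob = sim - row_max - np.log(np.exp(sim - row_max).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * b), targets].mean())

An encoder trained with such an objective can then serve as the frozen embedding function in the episodic evaluation sketched earlier, which is one way contrastive self-supervision feeds into few-shot learning.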
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification aims to train models for new classes effectively using only a limited number of labeled examples.
We propose a metric-learning-based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Generalized Few-Shot Video Classification with Video Retrieval and Feature Generation [132.82884193921535]
We argue that previous methods underestimate the importance of video feature learning and propose a two-stage approach.
We show that this simple baseline approach outperforms prior few-shot video classification methods by over 20 points on existing benchmarks.
We present two novel approaches that yield further improvement.
arXiv Detail & Related papers (2020-07-09T13:05:32Z)
- Looking back to lower-level information in few-shot learning [4.873362301533825]
We propose utilizing lower-level supporting information, namely the feature embeddings of hidden neural network layers, to improve classification accuracy.
Our experiments on two popular few-shot learning datasets, miniImageNet and tieredImageNet, show that our method can utilize the lower-level information in the network to improve state-of-the-art classification performance.
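The idea of looking back at lower-level features can be sketched with forward hooks on an off-the-shelf backbone; the ResNet-18 backbone, the choice of layer3, and the average pooling below are illustrative assumptions, not the paper's implementation:

import torch
import torchvision

backbone = torchvision.models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()            # expose the 512-d final embedding
backbone.eval()
captured = {}

def save_pooled(name):
    def hook(module, inputs, output):
        captured[name] = output.mean(dim=(2, 3))   # global-average-pool the feature map
    return hook

backbone.layer3.register_forward_hook(save_pooled("layer3"))   # 256-d lower-level features

@torch.no_grad()
def multi_level_embedding(images):
    final = backbone(images)                                   # (B, 512) final embedding
    return torch.cat([captured["layer3"], final], dim=1)       # (B, 256 + 512) combined embedding

The concatenated embeddings can then be fed to the same metric-based few-shot classifiers used elsewhere in this list.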
arXiv Detail & Related papers (2020-05-27T20:32:13Z)
- Knowledge Guided Metric Learning for Few-Shot Text Classification [22.832467388279873]
Inspired by human intelligence, we propose to introduce external knowledge into few-shot learning to imitate human knowledge.
We demonstrate that our method outperforms the state-of-the-art few-shot text classification models.
arXiv Detail & Related papers (2020-04-04T10:56:26Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline, learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe this simple method achieves competitive performance to state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
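The last two entries both argue that a plain whole-classification pre-trained embedding is a very strong few-shot baseline. The following sketch shows one common instantiation of that idea under assumed choices (a ResNet-18 backbone, SGD hyperparameters, and cosine similarity to support-class centroids); it is not the papers' released code, and it omits the meta-learning and self-distillation refinements they describe:

import torch
import torch.nn.functional as F
import torchvision

def pretrain_backbone(base_loader, num_base_classes, epochs=90,
                      device="cuda" if torch.cuda.is_available() else "cpu"):
    # Stage 1: ordinary supervised classification over all base classes.
    model = torchvision.models.resnet18(weights=None, num_classes=num_base_classes)
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
    for _ in range(epochs):
        for images, labels in base_loader:
            loss = F.cross_entropy(model(images.to(device)), labels.to(device))
            opt.zero_grad()
            loss.backward()
            opt.step()
    model.fc = torch.nn.Identity()           # drop the head, keep only the embedding function
    return model.eval()

@torch.no_grad()
def cosine_centroid_predict(model, support, support_labels, query, n_way):
    # Stage 2: classify queries by cosine similarity to the mean support embedding of
    # each novel class, with the backbone kept frozen. All tensors are assumed to live
    # on the same device as the model.
    emb_s = F.normalize(model(support), dim=1)       # (n_way * k_shot, dim)
    emb_q = F.normalize(model(query), dim=1)         # (n_query, dim)
    centroids = torch.stack([emb_s[support_labels == c].mean(dim=0) for c in range(n_way)])
    scores = emb_q @ F.normalize(centroids, dim=1).T   # cosine similarity to each class centroid
    return scores.argmax(dim=1)                        # predicted labels in [0, n_way)

Meta-Baseline additionally meta-learns over this kind of evaluation metric after pre-training, and the good-embedding paper reports a further boost from self-distillation; both refinements are intentionally left out of this sketch.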
This list is automatically generated from the titles and abstracts of the papers on this site.