Curriculum Meta-Learning for Few-shot Classification
- URL: http://arxiv.org/abs/2112.02913v1
- Date: Mon, 6 Dec 2021 10:29:23 GMT
- Title: Curriculum Meta-Learning for Few-shot Classification
- Authors: Emmanouil Stergiadis, Priyanka Agrawal, Oliver Squire
- Abstract summary: We propose an adaptation of the curriculum training framework, applicable to state-of-the-art meta learning techniques for few-shot classification.
Our experiments with the MAML algorithm on two few-shot image classification tasks show significant gains with the curriculum training framework.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an adaptation of the curriculum training framework, applicable to
state-of-the-art meta learning techniques for few-shot classification.
Curriculum-based training popularly attempts to mimic human learning by
progressively increasing the training complexity to enable incremental concept
learning. As the meta-learner's goal is learning how to learn from as few
samples as possible, the exact number of those samples (i.e. the size of the
support set) arises as a natural proxy of a given task's difficulty. We define
a simple yet novel curriculum schedule that begins with a larger support size
and progressively reduces it throughout training to eventually match the
desired shot-size of the test setup. This proposed method boosts the learning
efficiency as well as the generalization capability. Our experiments with the
MAML algorithm on two few-shot image classification tasks show significant
gains with the curriculum training framework. Ablation studies corroborate the
independence of our proposed method from the model architecture as well as the
meta-learning hyperparameters.
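The schedule described in the abstract, beginning with a larger support size and progressively shrinking it to the test-time shot size, can be sketched as below. This is a minimal illustration under assumptions: the linear decay and the `start_shots`/`target_shots` values are hypothetical, not the paper's actual schedule or hyperparameters.

```python
def support_size_schedule(step, total_steps, start_shots=10, target_shots=1):
    """Anneal the per-class support-set size from a larger starting value
    down to the desired test-time shot size over meta-training.

    A linear decay is used purely for illustration; start_shots and
    target_shots are assumed values, not the paper's settings.
    """
    frac = min(step / max(total_steps, 1), 1.0)  # training progress in [0, 1]
    shots = round(start_shots - frac * (start_shots - target_shots))
    return max(int(shots), target_shots)
```

Early in training the meta-learner would see easier (larger-support) tasks, and by the end it trains on exactly the shot size used at test time.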
Related papers
- Partner-Assisted Learning for Few-Shot Image Classification [54.66864961784989]
Few-shot Learning has been studied to mimic human visual capabilities and learn effective models without the need of exhaustive human annotation.
In this paper, we focus on the design of training strategy to obtain an elemental representation such that the prototype of each novel class can be estimated from a few labeled samples.
We propose a two-stage training scheme, which first trains a partner encoder to model pair-wise similarities and extract features serving as soft-anchors, and then trains a main encoder by aligning its outputs with soft-anchors while attempting to maximize classification performance.
arXiv Detail & Related papers (2021-09-15T22:46:19Z)
- Trainable Class Prototypes for Few-Shot Learning [5.481942307939029]
We propose the trainable prototypes for distance measure instead of the artificial ones within the meta-training and task-training framework.
Also to avoid the disadvantages that the episodic meta-training brought, we adopt non-episodic meta-training based on self-supervised learning.
Our method achieves state-of-the-art performance in a variety of established few-shot tasks on the standard few-shot visual classification dataset.
arXiv Detail & Related papers (2021-06-21T04:19:56Z)
- Task Attended Meta-Learning for Few-Shot Learning [3.0724051098062097]
We introduce a training curriculum motivated by selective focus in humans, called task attended meta-training, to weight the tasks in a batch.
The comparisons of the models with their non-task-attended counterparts on complex datasets validate its effectiveness.
arXiv Detail & Related papers (2021-06-20T07:34:37Z)
- Curriculum Learning: A Survey [65.31516318260759]
Curriculum learning strategies have been successfully employed in all areas of machine learning.
We construct a taxonomy of curriculum learning approaches by hand, considering various classification criteria.
We build a hierarchical tree of curriculum learning methods using an agglomerative clustering algorithm.
arXiv Detail & Related papers (2021-01-25T20:08:32Z)
- Few-Shot Image Classification via Contrastive Self-Supervised Learning [5.878021051195956]
We propose a new paradigm of unsupervised few-shot learning to address the deficiencies of existing approaches.
We solve the few-shot tasks in two phases, the first of which meta-trains a transferable feature extractor via contrastive self-supervised learning.
Our method achieves state-of-the-art performance on a variety of established few-shot tasks on the standard few-shot visual classification datasets.
arXiv Detail & Related papers (2020-08-23T02:24:31Z)
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
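One common way such adaptive sampling is realized, sketched here as a hypothetical illustration (the softmax weighting and the `temperature` parameter are assumptions, not the paper's exact scheme), is to draw tasks in proportion to their recent meta-loss so that harder episodes are visited more often:

```python
import math
import random

def sample_task(task_losses, temperature=1.0):
    """Sample a task id with probability proportional to exp(loss / T).

    task_losses maps task id -> recent meta-loss; higher-loss (harder)
    tasks are drawn more often. The softmax weighting is an illustrative
    choice, not the paper's published sampling rule.
    """
    tasks = list(task_losses)
    weights = [math.exp(task_losses[t] / temperature) for t in tasks]
    return random.choices(tasks, weights=weights, k=1)[0]
```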
arXiv Detail & Related papers (2020-07-17T03:15:53Z)
- Expert Training: Task Hardness Aware Meta-Learning for Few-Shot Classification [62.10696018098057]
We propose an easy-to-hard expert meta-training strategy to arrange the training tasks properly.
A task hardness aware module is designed and integrated into the training procedure to estimate the hardness of a task.
Experimental results on the miniImageNet and tieredImageNetSketch datasets show that the meta-learners can obtain better results with our expert training strategy.
arXiv Detail & Related papers (2020-07-13T08:49:00Z)
- Training few-shot classification via the perspective of minibatch and pretraining [10.007569291231915]
Few-shot classification is a challenging task which aims to emulate the human ability to learn concepts from limited prior data.
Recent progress in few-shot classification has featured meta-learning.
We propose multi-episode and cross-way training techniques, which respectively correspond to minibatch training and pretraining in standard classification problems.
arXiv Detail & Related papers (2020-04-10T03:14:48Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe this simple method achieves competitive performance to state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.