Meta-Meta Classification for One-Shot Learning
- URL: http://arxiv.org/abs/2004.08083v4
- Date: Sun, 14 Jun 2020 01:02:11 GMT
- Title: Meta-Meta Classification for One-Shot Learning
- Authors: Arkabandhu Chowdhury, Dipak Chaudhari, Swarat Chaudhuri, Chris
Jermaine
- Abstract summary: We present a new approach, called meta-meta classification, to learning in small-data settings.
In this approach, one uses a large set of learning problems to design an ensemble of learners, where each learner has high bias and low variance.
We evaluate the approach on a one-shot, one-class-versus-all classification task and show that it is able to outperform traditional meta-learning as well as ensembling approaches.
- Score: 11.27833234287093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new approach, called meta-meta classification, to learning in
small-data settings. In this approach, one uses a large set of learning
problems to design an ensemble of learners, where each learner has high bias
and low variance and is skilled at solving a specific type of learning problem.
The meta-meta classifier learns how to examine a given learning problem and
combine the various learners to solve the problem. The meta-meta learning
approach is especially suited to solving few-shot learning tasks, as it is
easier to learn to classify a new learning problem with little data than it is
to apply a learning algorithm to a small data set. We evaluate the approach on
a one-shot, one-class-versus-all classification task and show that it is able
to outperform traditional meta-learning as well as ensembling approaches.
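A minimal sketch of how such an ensemble could be applied at test time, assuming sklearn-style specialist learners with fit/predict_proba methods and a router model that maps a problem representation to mixture weights (all class and method names below are illustrative, not from the paper's code):
```python
# Illustrative sketch only: the paper's actual architecture and training
# procedure may differ. "Specialists" are the high-bias, low-variance learners;
# the "router" stands in for the meta-meta classifier.
import numpy as np

class MetaMetaEnsemble:
    def __init__(self, specialists, router, featurize_problem):
        self.specialists = specialists            # each skilled at one type of problem
        self.router = router                      # meta-meta classifier: problem -> weights
        self.featurize_problem = featurize_problem

    def solve(self, support_x, support_y, query_x):
        # 1. Represent the learning problem itself (e.g., embed the support set).
        problem_feat = self.featurize_problem(support_x, support_y)
        # 2. The meta-meta classifier decides how much to trust each specialist.
        weights = self.router.predict_proba(problem_feat.reshape(1, -1))[0]
        # 3. Each specialist adapts to the small support set and predicts on the queries.
        preds = [s.fit(support_x, support_y).predict_proba(query_x) for s in self.specialists]
        # 4. Combine the specialists' predictions with the learned weights.
        return np.tensordot(weights, np.stack(preds), axes=1)
```
In the paper's setting, the router would itself be trained on a large collection of learning problems, so that it learns which specialist tends to succeed on which kind of problem; the sketch only shows how the combination could be applied to a new one-shot problem.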
Related papers
- On the Efficiency of Integrating Self-supervised Learning and Meta-learning for User-defined Few-shot Keyword Spotting [51.41426141283203]
User-defined keyword spotting is the task of detecting new spoken terms defined by users.
Previous works incorporate self-supervised learning models or apply meta-learning algorithms.
Our results show that HuBERT combined with a Matching network achieves the best performance.
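As a rough illustration of the matching-network side of that combination (the HuBERT encoder is assumed to have already produced fixed-size embeddings; this is a generic matching-network scoring rule, not the paper's code):
```python
# Hedged sketch: matching-network-style classification over precomputed speech
# embeddings (e.g., from a HuBERT encoder, which is assumed and not shown here).
import torch
import torch.nn.functional as F

def matching_network_probs(support_emb, support_labels, query_emb, num_classes):
    """support_emb: [N, D], support_labels: [N] (long), query_emb: [Q, D]."""
    # Cosine similarity between every query and every enrolled (support) example.
    sims = F.cosine_similarity(query_emb.unsqueeze(1), support_emb.unsqueeze(0), dim=-1)  # [Q, N]
    attn = sims.softmax(dim=-1)                                # attention over support items
    one_hot = F.one_hot(support_labels, num_classes).float()   # [N, C]
    return attn @ one_hot                                      # [Q, C] class probabilities
```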
arXiv Detail & Related papers (2022-04-01T10:59:39Z)
- Learning where to learn: Gradient sparsity in meta and continual learning [4.845285139609619]
We show that meta-learning can be improved by letting the learning algorithm decide which weights to change.
We find that patterned sparsity emerges from this process, with the pattern of sparsity varying on a problem-by-problem basis.
Our results shed light on an ongoing debate on whether meta-learning can discover adaptable features and suggest that learning by sparse gradient descent is a powerful inductive bias for meta-learning systems.
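One way such a learned-sparsity update could look in an inner loop, assuming a meta-learned mask parameter per weight tensor (a sketch, not the paper's exact gating mechanism):
```python
# Hedged sketch: a meta-learned mask gates which weights the inner loop may change.
import torch

def masked_inner_update(params, mask_logits, loss, lr=0.01):
    """params and mask_logits are matching lists of tensors; loss depends on params."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    updated = []
    for p, g, m in zip(params, grads, mask_logits):
        gate = torch.sigmoid(m)          # soft mask; sparsity can emerge during meta-training
        updated.append(p - lr * gate * g)
    return updated
```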
arXiv Detail & Related papers (2021-10-27T12:54:36Z)
- Learning an Explicit Hyperparameter Prediction Function Conditioned on Tasks [62.63852372239708]
Meta-learning aims to learn a learning methodology for machine learning from observed tasks, so as to generalize to new query tasks.
We interpret this learning methodology as an explicit hyperparameter prediction function shared by all training tasks.
Such a setting guarantees that the meta-learned learning methodology is able to flexibly fit diverse query tasks.
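A hedged sketch of what a shared, task-conditioned hyperparameter prediction function might look like (the task embedding and the choice of predicted hyperparameters are assumptions, not details from the paper):
```python
# Hedged sketch: one network, shared across all training tasks, maps a task
# representation to inner-loop hyperparameters (here, per-layer learning rates).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperparamPredictor(nn.Module):
    def __init__(self, task_emb_dim, num_layers, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(task_emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_layers),
        )

    def forward(self, task_embedding):
        # Softplus keeps the predicted learning rates positive.
        return F.softplus(self.net(task_embedding))
```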
arXiv Detail & Related papers (2021-07-06T04:05:08Z)
- Variable-Shot Adaptation for Online Meta-Learning [123.47725004094472]
We study the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.
We find that meta-learning solves the full task set with fewer overall labels and greater cumulative performance, compared to standard supervised methods.
These results suggest that meta-learning is an important ingredient for building learning systems that continuously learn and improve over a sequence of problems.
arXiv Detail & Related papers (2020-12-14T18:05:24Z)
- Is Support Set Diversity Necessary for Meta-Learning? [14.231486872262531]
We propose a modification to traditional meta-learning approaches in which we keep the support sets fixed across tasks, thus reducing task diversity.
Surprisingly, we find that not only does this modification have no adverse effects; it almost always improves performance for a variety of datasets and meta-learning methods.
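For concreteness, the episode-construction change could look roughly like the following, where the support set is drawn once and reused while only the query sets vary (a simplified sketch; the paper's sampling protocol may differ, e.g., in how classes are chosen):
```python
# Hedged sketch: build meta-training episodes that reuse one fixed support set.
import random

def make_fixed_support_episodes(data_by_class, n_way, k_shot, q_queries, n_episodes, seed=0):
    rng = random.Random(seed)
    classes = rng.sample(sorted(data_by_class), n_way)
    # Support set is sampled once and kept fixed across all episodes.
    support = {c: rng.sample(data_by_class[c], k_shot) for c in classes}
    episodes = []
    for _ in range(n_episodes):
        # Only the query sets change from episode to episode.
        query = {c: rng.sample([x for x in data_by_class[c] if x not in support[c]], q_queries)
                 for c in classes}
        episodes.append((support, query))
    return episodes
```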
arXiv Detail & Related papers (2020-11-28T02:28:42Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe this simple method achieves competitive performance to state-of-the-art methods on standard benchmarks.
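A hedged sketch of the kind of few-shot head this family of methods meta-learns on top of a whole-classification pre-trained encoder: a cosine-similarity nearest-centroid classifier (the encoder itself and the temperature value are assumptions here, not details taken from the paper):
```python
# Hedged sketch: cosine nearest-centroid few-shot classification on top of
# features from a pre-trained encoder (the encoder is assumed and not shown).
import torch
import torch.nn.functional as F

def cosine_centroid_logits(support_feat, support_labels, query_feat, n_way, temperature=10.0):
    """support_feat: [N, D], support_labels: [N] with values in [0, n_way), query_feat: [Q, D]."""
    # One centroid per class, averaged over that class's support features.
    centroids = torch.stack([support_feat[support_labels == c].mean(dim=0) for c in range(n_way)])
    # Cosine similarity of each query to each centroid, scaled into logits.
    sims = F.cosine_similarity(query_feat.unsqueeze(1), centroids.unsqueeze(0), dim=-1)  # [Q, n_way]
    return temperature * sims
```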
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.