Few Is Enough: Task-Augmented Active Meta-Learning for Brain Cell
Classification
- URL: http://arxiv.org/abs/2007.05009v1
- Date: Thu, 9 Jul 2020 18:03:12 GMT
- Title: Few Is Enough: Task-Augmented Active Meta-Learning for Brain Cell
Classification
- Authors: Pengyu Yuan, Aryan Mobiny, Jahandar Jahanipour, Xiaoyang Li, Pietro
Antonio Cicalese, Badrinath Roysam, Vishal Patel, Dragan Maric, and Hien Van
Nguyen
- Abstract summary: We propose a tAsk-auGmented actIve meta-LEarning (AGILE) method to efficiently adapt Deep Neural Networks to new tasks.
AGILE combines a meta-learning algorithm with a novel task augmentation technique which we use to generate an initial adaptive model.
We show that the proposed task-augmented meta-learning framework can learn to classify new cell types after a single gradient step.
- Score: 8.998976678920236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (or DNNs) must constantly cope with distribution changes
in the input data when the task of interest or the data collection protocol
changes. Retraining a network from scratch to combat this issue poses a
significant cost. Meta-learning aims to deliver an adaptive model that is
sensitive to these underlying distribution changes, but requires many tasks
during the meta-training process. In this paper, we propose a tAsk-auGmented
actIve meta-LEarning (AGILE) method to efficiently adapt DNNs to new tasks by
using a small number of training examples. AGILE combines a meta-learning
algorithm with a novel task augmentation technique which we use to generate an
initial adaptive model. It then uses Bayesian dropout uncertainty estimates to
actively select the most difficult samples when updating the model to a new
task. This allows AGILE to learn with fewer tasks and a few informative
samples, achieving high performance with a limited dataset. We perform our
experiments using the brain cell classification task and compare the results to
a plain meta-learning model trained from scratch. We show that the proposed
task-augmented meta-learning framework can learn to classify new cell types
after a single gradient step with a limited number of training samples. We show
that active learning with Bayesian uncertainty can further improve the
performance when the number of training samples is extremely small. Using only
1% of the training data and a single update step, we achieved 90% accuracy on
the new cell type classification task, a 50-percentage-point improvement over a
state-of-the-art meta-learning algorithm.
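To make the abstract's two ingredients concrete, here is a minimal PyTorch sketch of (a) a single MAML-style inner gradient step and (b) Monte-Carlo (Bayesian) dropout uncertainty used to actively select the hardest samples. Everything here (function names, the label-permutation task augmentation, `k`, `n_passes`) is an illustrative assumption, not AGILE's actual implementation.

```python
import torch
import torch.nn.functional as F

def augment_task(x, y, n_classes):
    # Toy task augmentation: relabel classes with a random permutation,
    # producing a "new" task from the same images. AGILE's augmentation
    # may differ; this only illustrates cheaply generating extra tasks.
    perm = torch.randperm(n_classes)
    return x, perm[y]

def mc_dropout_entropy(model, x, n_passes=20):
    # MC dropout: keep dropout active at inference, average the softmax
    # over stochastic passes, and score each sample by its predictive
    # entropy (higher = more uncertain). A fuller version would switch
    # normalization layers back to eval mode.
    model.train()  # keeps nn.Dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_passes)]
        ).mean(dim=0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

def adapt_one_step(model, x_support, y_support, inner_lr=0.01):
    # Single MAML-style inner update: one gradient step on the support
    # set yields task-adapted parameters, returned functionally so the
    # meta-model itself is left untouched.
    loss = F.cross_entropy(model(x_support), y_support)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return [p - inner_lr * g for p, g in zip(model.parameters(), grads)]

def active_adapt(model, x_pool, y_pool, k=10):
    # Active adaptation to a new cell type: select the k most uncertain
    # pool samples (in real active learning, their labels would be
    # queried here), then take the single adaptation step on them.
    scores = mc_dropout_entropy(model, x_pool)
    idx = scores.topk(k).indices
    return adapt_one_step(model, x_pool[idx], y_pool[idx])
```

Applying the returned adapted parameters requires a functional forward pass (e.g., `torch.func.functional_call`); that plumbing is omitted here for brevity.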
Related papers
- Architecture, Dataset and Model-Scale Agnostic Data-free Meta-Learning [119.70303730341938]
We propose ePisode cUrriculum inveRsion (ECI) during data-free meta training and invErsion calibRation following inner loop (ICFIL) during meta testing.
ECI adaptively increases the difficulty level of pseudo episodes according to the real-time feedback of the meta model.
We formulate the optimization process of meta training with ECI as an adversarial form in an end-to-end manner.
arXiv Detail & Related papers (2023-03-20T15:10:41Z)
- TIDo: Source-free Task Incremental Learning in Non-stationary Environments [0.0]
Updating a model-based agent to learn new target tasks requires us to store past training data.
Few-shot task incremental learning methods reduce the dependence on large labeled target datasets.
We propose a one-shot task incremental learning approach that can adapt to non-stationary source and target tasks.
arXiv Detail & Related papers (2023-01-28T02:19:45Z)
- Voting from Nearest Tasks: Meta-Vote Pruning of Pre-trained Models for Downstream Tasks [55.431048995662714]
We create a small model for a new task from the pruned models of similar tasks.
We show that a few fine-tuning steps on this model suffice to produce a promising pruned-model for the new task.
We develop a simple but effective "Meta-Vote Pruning (MVP)" method that significantly reduces the pruning iterations for a new task; a hedged sketch of the voting idea appears after this list.
arXiv Detail & Related papers (2023-01-27T06:49:47Z)
- Gradient-Based Meta-Learning Using Uncertainty to Weigh Loss for Few-Shot Learning [5.691930884128995]
Model-Agnostic Meta-Learning (MAML) is one of the most successful meta-learning techniques for few-shot learning.
A new method is proposed in which the task-specific learner adaptively learns to select parameters that minimize the loss on new tasks.
Method 1 generates weights by comparing meta-loss differences, improving accuracy when there are few classes.
Method 2 introduces the homoscedastic uncertainty of each task to weigh multiple losses within the original gradient-descent framework (a standard form of this weighted loss is sketched after this list).
arXiv Detail & Related papers (2022-08-17T08:11:51Z)
- The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z)
- Generating meta-learning tasks to evolve parametric loss for classification learning [1.1355370218310157]
In existing meta-learning approaches, learning tasks for training meta-models are usually collected from public datasets.
We propose a meta-learning approach that uses randomly generated meta-learning tasks to obtain a parametric loss for classification learning on large-scale data.
arXiv Detail & Related papers (2021-11-20T13:07:55Z)
- MetaICL: Learning to Learn In Context [87.23056864536613]
We introduce MetaICL, a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set of training tasks.
We show that MetaICL approaches (and sometimes beats) the performance of models fully finetuned on the target task training data, and outperforms much bigger models with nearly 8x more parameters.
arXiv Detail & Related papers (2021-10-29T17:42:08Z)
- ProtoDA: Efficient Transfer Learning for Few-Shot Intent Classification [21.933876113300897]
We adopt an alternative approach by transfer learning on an ensemble of related tasks using prototypical networks under the meta-learning paradigm (a minimal prototypical-network sketch appears after this list).
Using intent classification as a case study, we demonstrate that increasing variability in training tasks can significantly improve classification performance.
arXiv Detail & Related papers (2021-01-28T00:19:13Z)
- Meta-Regularization by Enforcing Mutual-Exclusiveness [0.8057006406834467]
We propose a regularization technique for meta-learning models that gives the model designer more control over the information flow during meta-training.
Our proposed regularization function shows an accuracy boost of roughly 36% on the Omniglot dataset.
arXiv Detail & Related papers (2021-01-24T22:57:19Z)
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
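For the Meta-Vote Pruning (MVP) entry above, a hedged sketch of what "voting from nearest tasks" can look like: given binary pruning masks from the pruned models of similar tasks, keep a weight when a strict majority of those masks kept it. The paper's actual procedure may differ; this only illustrates mask majority voting.

```python
import torch

def meta_vote_mask(masks: list[torch.Tensor]) -> torch.Tensor:
    # masks: binary (0/1) pruning masks of identical shape, one per
    # similar task's pruned model. A weight survives iff a strict
    # majority of the neighboring tasks retained it.
    votes = torch.stack(masks).sum(dim=0)
    return (votes * 2 > len(masks)).to(masks[0].dtype)
```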
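Method 2 in the uncertainty-weighting entry above appears to follow the homoscedastic-uncertainty loss weighting of Kendall et al.; a standard form of that combined loss over T tasks is (my rendering, not necessarily the paper's exact formulation):

```latex
\mathcal{L}(\theta, \sigma_1, \ldots, \sigma_T)
  = \sum_{t=1}^{T} \left( \frac{1}{2\sigma_t^{2}} \, \mathcal{L}_t(\theta) + \log \sigma_t \right)
```

Each learned scale $\sigma_t$ down-weights tasks whose losses are noisier, while the $\log \sigma_t$ term prevents the trivial solution $\sigma_t \to \infty$.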
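For the ProtoDA entry above, a minimal sketch of the prototypical-network classification it builds on: class prototypes are mean support-set embeddings, and queries are classified by negative distance to each prototype. Shapes and names here are illustrative assumptions.

```python
import torch

def prototypes(support_emb, support_labels, n_classes):
    # Mean embedding per class over the support set:
    # support_emb is (N, D), the result is (n_classes, D).
    return torch.stack([
        support_emb[support_labels == c].mean(dim=0)
        for c in range(n_classes)
    ])

def proto_logits(query_emb, protos):
    # Negative squared Euclidean distance to each prototype serves as
    # the logit for softmax classification (Snell et al., 2017).
    return -torch.cdist(query_emb, protos).pow(2)
```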