Meta-learning for Few-shot Natural Language Processing: A Survey
- URL: http://arxiv.org/abs/2007.09604v1
- Date: Sun, 19 Jul 2020 06:36:41 GMT
- Title: Meta-learning for Few-shot Natural Language Processing: A Survey
- Authors: Wenpeng Yin
- Abstract summary: Few-shot natural language processing (NLP) refers to NLP tasks that are accompanied by merely a handful of labeled examples.
This paper focuses on NLP domain, especially few-shot applications.
We try to provide clearer definitions, progress summary and some common datasets of applying meta-learning to few-shot NLP.
- Score: 10.396506243272158
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot natural language processing (NLP) refers to NLP tasks that are
accompanied by merely a handful of labeled examples. This is a real-world
challenge that an AI system must learn to handle. Usually we rely on collecting
more auxiliary information or developing a more efficient learning algorithm.
However, the general gradient-based optimization in high capacity models, if
training from scratch, requires many parameter-updating steps over a large
number of labeled examples to perform well (Snell et al., 2017). If the target
task itself cannot provide more information, how about collecting more tasks
equipped with rich annotations to help the model learning? The goal of
meta-learning is to train a model on a variety of tasks with rich annotations,
such that it can solve a new task using only a few labeled samples. The key
idea is to train the model's initial parameters such that the model has maximal
performance on a new task after the parameters have been updated through zero
or a couple of gradient steps. There are already some surveys for
meta-learning, such as (Vilalta and Drissi, 2002; Vanschoren, 2018; Hospedales
et al., 2020). Nevertheless, this paper focuses on NLP domain, especially
few-shot applications. We try to provide clearer definitions, progress summary
and some common datasets of applying meta-learning to few-shot NLP.
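The recipe described in the abstract (train an initialization so that a few gradient steps on a new task's support set already perform well on its query set) is the MAML-style view of meta-learning. The snippet below is a minimal sketch of that idea, assuming PyTorch 2.x for torch.func.functional_call; the toy model, task data, and learning rates are illustrative placeholders rather than the survey's own setup.

```python
# Minimal MAML-style sketch of the abstract's key idea: learn initial
# parameters so that one gradient step on a new task's support set
# already does well on its query set. Assumes PyTorch 2.x
# (torch.func.functional_call); model, data, and hyperparameters are
# illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


def inner_adapt(model, params, support_x, support_y, inner_lr=0.1):
    """One gradient step on the support set; returns adapted parameters."""
    logits = torch.func.functional_call(model, params, (support_x,))
    loss = F.cross_entropy(logits, support_y)
    grads = torch.autograd.grad(loss, tuple(params.values()), create_graph=True)
    return {name: p - inner_lr * g for (name, p), g in zip(params.items(), grads)}


def meta_train_step(model, meta_opt, task_batch):
    """Outer update: optimize the shared initialization over a batch of tasks."""
    params = dict(model.named_parameters())
    meta_loss = 0.0
    for support_x, support_y, query_x, query_y in task_batch:
        adapted = inner_adapt(model, params, support_x, support_y)
        query_logits = torch.func.functional_call(model, adapted, (query_x,))
        meta_loss = meta_loss + F.cross_entropy(query_logits, query_y)
    meta_opt.zero_grad()
    meta_loss.backward()  # gradients flow back through the inner step (second order)
    meta_opt.step()
    return meta_loss.item()


if __name__ == "__main__":
    # Toy 5-way classification over random 32-dim "sentence embeddings".
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 5))
    meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    tasks = [(torch.randn(5, 32), torch.randint(0, 5, (5,)),    # support set
              torch.randn(15, 32), torch.randint(0, 5, (15,)))  # query set
             for _ in range(4)]
    print("meta-loss:", meta_train_step(model, meta_opt, tasks))
```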
Related papers
- MoBYv2AL: Self-supervised Active Learning for Image Classification [57.4372176671293]
We present MoBYv2AL, a novel self-supervised active learning framework for image classification.
Our contribution lies in lifting MoBY, one of the most successful self-supervised learning algorithms, to the AL pipeline.
We achieve state-of-the-art results when compared to recent AL methods.
arXiv Detail & Related papers (2023-01-04T10:52:02Z)
- Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation [24.488427641442694]
We propose a novel conditional neural process-based approach for few-shot text classification.
Our key idea is to represent each task using gradient information from a base model.
Our approach outperforms traditional fine-tuning, sequential transfer learning, and state-of-the-art meta learning approaches.
arXiv Detail & Related papers (2022-01-27T15:29:30Z)
- MetaICL: Learning to Learn In Context [87.23056864536613]
We introduce MetaICL, a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set of training tasks (see the format sketch after this list).
We show that MetaICL approaches (and sometimes beats) the performance of models fully finetuned on the target task training data, and outperforms models with nearly 8x more parameters.
arXiv Detail & Related papers (2021-10-29T17:42:08Z)
- Meta-Regularization by Enforcing Mutual-Exclusiveness [0.8057006406834467]
We propose a regularization technique for meta-learning models that gives the model designer more control over the information flow during meta-training.
Our proposed regularization function shows an accuracy boost of approximately 36% on the Omniglot dataset.
arXiv Detail & Related papers (2021-01-24T22:57:19Z)
- Variable-Shot Adaptation for Online Meta-Learning [123.47725004094472]
We study the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.
We find that meta-learning solves the full task set with fewer overall labels and greater cumulative performance, compared to standard supervised methods.
These results suggest that meta-learning is an important ingredient for building learning systems that continuously learn and improve over a sequence of problems.
arXiv Detail & Related papers (2020-12-14T18:05:24Z)
- Low-Resource Domain Adaptation for Compositional Task-Oriented Semantic Parsing [85.35582118010608]
Task-oriented semantic parsing is a critical component of virtual assistants.
Recent advances in deep learning have enabled several approaches to successfully parse more complex queries.
We propose a novel method that outperforms a supervised neural model at a 10-fold data reduction.
arXiv Detail & Related papers (2020-10-07T17:47:53Z)
- Self-Supervised Meta-Learning for Few-Shot Natural Language Classification Tasks [40.97125791174191]
We propose a self-supervised approach to generate a large, rich, meta-learning task distribution from unlabeled text.
We show that this meta-training leads to better few-shot generalization than language-model pre-training followed by finetuning.
arXiv Detail & Related papers (2020-09-17T17:53:59Z)
- Language Models as Few-Shot Learner for Task-Oriented Dialogue Systems [74.8759568242933]
Task-oriented dialogue systems use four connected modules: Natural Language Understanding (NLU), Dialogue State Tracking (DST), Dialogue Policy (DP), and Natural Language Generation (NLG).
A research challenge is to learn each module with the least amount of samples given the high cost related to the data collection.
We evaluate the priming few-shot ability of language models in the NLU, DP and NLG tasks.
arXiv Detail & Related papers (2020-08-14T08:23:21Z)
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time (see the episode-construction sketch after this list).
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z)
- Few Is Enough: Task-Augmented Active Meta-Learning for Brain Cell Classification [8.998976678920236]
We propose a tAsk-auGmented actIve meta-LEarning (AGILE) method to efficiently adapt Deep Neural Networks to new tasks.
AGILE combines a meta-learning algorithm with a novel task augmentation technique which we use to generate an initial adaptive model.
We show that the proposed task-augmented meta-learning framework can learn to classify new cell types after a single gradient step.
arXiv Detail & Related papers (2020-07-09T18:03:12Z)
- Learning to Learn to Disambiguate: Meta-Learning for Few-Shot Word Sense Disambiguation [26.296412053816233]
We propose a meta-learning framework for few-shot word sense disambiguation.
The goal is to learn to disambiguate unseen words from only a few labeled instances.
We extend several popular meta-learning approaches to this scenario, and analyze their strengths and weaknesses.
arXiv Detail & Related papers (2020-04-29T17:33:31Z)
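Format sketch for the MetaICL entry above: meta-training for in-context learning turns each task into sequences of k labeled demonstrations followed by a query input, and the language model is trained to produce the query's label as the continuation. The separators, k, and toy data below are illustrative assumptions, not the exact MetaICL recipe.

```python
# Minimal sketch of in-context meta-training data construction:
# k demonstrations from one task are concatenated with a query input,
# and the LM is trained to generate the query's label.
# Formatting and toy data are illustrative assumptions.
import random


def build_icl_sequence(examples, query_input):
    """Concatenate k (input, output) demonstrations with a query input."""
    demo = "\n".join(f"{x}\t{y}" for x, y in examples)
    return f"{demo}\n{query_input}\t"


def sample_meta_training_instance(task_data, k=4):
    """Pick k demonstrations and one target example from a single task."""
    picked = random.sample(task_data, k + 1)
    *demos, (query_x, query_y) = picked
    return build_icl_sequence(demos, query_x), query_y


if __name__ == "__main__":
    sentiment_task = [
        ("great movie", "positive"), ("boring plot", "negative"),
        ("loved it", "positive"), ("waste of time", "negative"),
        ("instant classic", "positive"), ("fell asleep", "negative"),
    ]
    prompt, target = sample_meta_training_instance(sentiment_task)
    # During meta-training the LM is tuned to generate `target` given
    # `prompt`; at test time the same format is applied to unseen tasks.
    print(prompt + target)
```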
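Episode-construction sketch for the Adaptive Task Sampling entry above: few-shot meta-learning repeatedly samples N-way K-shot "episodes" (a small support set plus a query set) from a larger labeled pool so that training mimics the test-time situation. The pool, N, K, and query size below are illustrative assumptions.

```python
# Minimal sketch of N-way K-shot episode sampling; the labeled pool and
# episode sizes are illustrative assumptions.
import random
from collections import defaultdict


def sample_episode(labeled_pool, n_way=5, k_shot=1, n_query=15):
    """Build one few-shot episode: a support set and a query set.

    labeled_pool: list of (text, label) pairs covering many classes.
    """
    by_label = defaultdict(list)
    for text, label in labeled_pool:
        by_label[label].append(text)

    classes = random.sample(list(by_label), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        examples = random.sample(by_label[cls], k_shot + n_query)
        support += [(x, episode_label) for x in examples[:k_shot]]
        query += [(x, episode_label) for x in examples[k_shot:]]
    return support, query


if __name__ == "__main__":
    # Toy pool: 20 classes with 30 examples each, so sampling never runs short.
    pool = [(f"example {i} of class {c}", c) for c in range(20) for i in range(30)]
    support, query = sample_episode(pool)
    print(len(support), "support examples,", len(query), "query examples")
```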
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.