Boosting Natural Language Generation from Instructions with
Meta-Learning
- URL: http://arxiv.org/abs/2210.11617v1
- Date: Thu, 20 Oct 2022 22:23:23 GMT
- Title: Boosting Natural Language Generation from Instructions with
Meta-Learning
- Authors: Budhaditya Deb, Guoqing Zheng, Ahmed Hassan Awadallah
- Abstract summary: Recent work has shown that language models (LMs) trained with
multi-task instructional learning (MTIL) can solve diverse NLP tasks with improved
performance compared to prompt tuning.
In this paper we investigate whether meta-learning applied to MTIL can further improve generalization to unseen tasks in a zero-shot setting.
- Score: 43.64522457686405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work has shown that language models (LMs) trained with multi-task
instructional learning (MTIL) can solve diverse NLP tasks in zero- and
few-shot settings with improved performance compared to prompt tuning. MTIL
illustrates that LMs can extract and use information about the task from
instructions beyond the surface patterns of the inputs and outputs. This
suggests that meta-learning may further enhance the utilization of instructions
for effective task transfer. In this paper we investigate whether meta-learning
applied to MTIL can further improve generalization to unseen tasks in a
zero-shot setting. Specifically, we propose to adapt meta-learning to MTIL in
three directions: 1) Model Agnostic Meta Learning (MAML), 2) Hyper-Network
(HNet) based adaptation to generate task specific parameters conditioned on
instructions, and 3) an approach combining HNet and MAML. Through extensive
experiments on the large scale Natural Instructions V2 dataset, we show that
our proposed approaches significantly improve over strong baselines in
zero-shot settings. In particular, meta-learning improves the effectiveness of
instructions and is most impactful when the test tasks are strictly zero-shot
(i.e. no similar tasks in the training set) and are "hard" for LMs,
illustrating the potential of meta-learning for MTIL for out-of-distribution
tasks.
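To make the three directions concrete, here is a minimal, hypothetical sketch of direction 3 (HNet combined with MAML). It is not the authors' implementation: the `InstructionHyperNet` module, the single linear adapter standing in for a frozen pretrained LM, the dimensions, and the toy support/query data are all assumptions for illustration; in the paper the setup is a full LM trained on Natural Instructions V2.

```python
# Minimal sketch (assumed, not the paper's code): a hypernetwork generates
# task-specific adapter parameters from an instruction embedding (HNet), and a
# MAML-style inner/outer loop adapts and meta-trains them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstructionHyperNet(nn.Module):
    """HNet direction: map an instruction embedding to task-specific adapter weights."""
    def __init__(self, instr_dim=64, hidden_dim=32, adapter_in=16, adapter_out=16):
        super().__init__()
        self.adapter_in, self.adapter_out = adapter_in, adapter_out
        self.net = nn.Sequential(
            nn.Linear(instr_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, adapter_in * adapter_out + adapter_out),
        )

    def forward(self, instr_emb):
        flat = self.net(instr_emb)
        n = self.adapter_in * self.adapter_out
        weight = flat[:n].view(self.adapter_out, self.adapter_in)
        bias = flat[n:]
        return weight, bias

def adapter_forward(x, weight, bias):
    # Stand-in for inserting the generated adapter into a frozen pretrained LM.
    return F.linear(x, weight, bias)

hnet = InstructionHyperNet()
meta_opt = torch.optim.Adam(hnet.parameters(), lr=1e-3)
inner_lr = 0.1

# Toy meta-batch: each "task" has an instruction embedding plus support/query data.
tasks = [
    {"instr": torch.randn(64),
     "xs": torch.randn(8, 16), "ys": torch.randn(8, 16),
     "xq": torch.randn(8, 16), "yq": torch.randn(8, 16)}
    for _ in range(4)
]

meta_opt.zero_grad()
for task in tasks:
    # HNet: generate task-specific adapter parameters from the instruction.
    weight, bias = hnet(task["instr"])
    # MAML inner step: adapt the generated parameters on the task's support set.
    support_loss = F.mse_loss(adapter_forward(task["xs"], weight, bias), task["ys"])
    g_w, g_b = torch.autograd.grad(support_loss, (weight, bias), create_graph=True)
    w_adapted, b_adapted = weight - inner_lr * g_w, bias - inner_lr * g_b
    # MAML outer step: the query loss through the adapted parameters trains the HNet.
    query_loss = F.mse_loss(adapter_forward(task["xq"], w_adapted, b_adapted), task["yq"])
    query_loss.backward()
meta_opt.step()
```

Dropping the hypernetwork and adapting the LM parameters directly recovers direction 1 (plain MAML); dropping the inner step and using the generated parameters as-is recovers direction 2 (HNet only).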
Related papers
- SwitchCIT: Switching for Continual Instruction Tuning of Large Language Models [14.085371250265224]
Large language models (LLMs) have exhibited impressive capabilities in various domains, particularly in general language understanding.
However, these models, trained on massive text data, may not be finely optimized for specific tasks triggered by instructions.
This work addresses catastrophic forgetting in continual instruction learning for LLMs through a switching mechanism that routes computations to parameter-efficient tuned models (sketched below).
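A rough, hedged sketch of the routing idea (assumed, not SwitchCIT's actual code): a small switch network maps an instruction representation to one of several parameter-efficient adapters, so earlier tasks keep their own parameters; the frozen base LLM and the adapter shapes are placeholders.

```python
# Minimal sketch (assumed, not SwitchCIT's code): route each instruction to one of
# several parameter-efficient tuned adapters so earlier tasks are not overwritten.
import torch
import torch.nn as nn

class AdapterRouter(nn.Module):
    def __init__(self, instr_dim: int, num_task_groups: int):
        super().__init__()
        # Small switch network that identifies which task group an instruction belongs to.
        self.switch = nn.Linear(instr_dim, num_task_groups)
        # One lightweight adapter per task group; the frozen base LLM is omitted here.
        self.adapters = nn.ModuleList(
            nn.Linear(instr_dim, instr_dim) for _ in range(num_task_groups)
        )

    def forward(self, instr_emb: torch.Tensor) -> torch.Tensor:
        groups = self.switch(instr_emb).argmax(dim=-1)   # pick a task group per example
        return torch.stack([self.adapters[int(g)](e) for g, e in zip(groups, instr_emb)])

router = AdapterRouter(instr_dim=32, num_task_groups=3)
out = router(torch.randn(4, 32))   # toy batch of instruction embeddings
print(out.shape)                    # torch.Size([4, 32])
```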
arXiv Detail & Related papers (2024-07-16T14:37:33Z)
- From Instance Training to Instruction Learning: Task Adapters Generation from Instructions [29.452006810725184]
This paper focuses on simulating human learning to address the shortcomings of instance training.
We introduce Task Adapters Generation from Instructions (TAGI), which automatically constructs the task-specific model.
We evaluate TAGI on the Super-Natural Instructions and P3 datasets.
arXiv Detail & Related papers (2024-06-18T08:14:28Z)
- Instruction Tuning With Loss Over Instructions [42.9106826952674]
Instruction Modelling (IM) trains LMs by applying the loss function to the instruction and prompt part rather than solely to the output part, as sketched below.
We show that, in many scenarios, IM can effectively improve the LM performance on both NLP tasks and open-ended generation benchmarks.
Remarkably, in the most advantageous case, IM boosts model performance on AlpacaEval 1.0 by over 100%.
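A toy sketch of that contrast (assumed, not the paper's code): with the standard recipe, instruction and prompt positions are masked out of the loss with label -100, while an IM-style loss keeps them; token IDs are placeholders and the next-token shift is omitted for brevity.

```python
# Toy sketch (assumed, not the paper's code) contrasting output-only loss with
# loss over instructions; ignore_index=-100 drops masked positions from the loss.
import torch
import torch.nn.functional as F

vocab_size = 100
# Toy token ids: [instruction + prompt tokens | output tokens]
tokens = torch.tensor([[12, 47, 3, 9, 58, 21, 7]])
n_prompt = 4                                   # first 4 tokens are instruction/prompt
logits = torch.randn(1, tokens.size(1), vocab_size, requires_grad=True)

# Standard instruction tuning: loss only on the output part.
labels_output_only = tokens.clone()
labels_output_only[:, :n_prompt] = -100        # masked out of the loss
loss_output_only = F.cross_entropy(
    logits.view(-1, vocab_size), labels_output_only.view(-1), ignore_index=-100
)

# Instruction Modelling (IM)-style: loss over instruction/prompt tokens too.
loss_with_instructions = F.cross_entropy(
    logits.view(-1, vocab_size), tokens.view(-1), ignore_index=-100
)
print(float(loss_output_only), float(loss_with_instructions))
```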
arXiv Detail & Related papers (2024-05-23T10:12:03Z)
- TransPrompt v2: A Transferable Prompting Framework for Cross-task Text Classification [37.824031151922604]
We propose TransPrompt v2, a novel transferable prompting framework for few-shot learning across similar or distant text classification tasks.
For learning across similar tasks, we employ a multi-task meta-knowledge acquisition (MMA) procedure to train a meta-learner.
For learning across distant tasks, we inject the task type descriptions into the prompt, and capture the intra-type and inter-type prompt embeddings.
arXiv Detail & Related papers (2023-08-29T04:16:57Z)
- Instruction Position Matters in Sequence Generation with Large Language Models [67.87516654892343]
Large language models (LLMs) are capable of performing conditional sequence generation tasks, such as translation or summarization.
We propose enhancing the instruction-following capability of LLMs by shifting the position of task instructions after the input sentences, as illustrated below.
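A tiny illustration of the formatting change (hypothetical template strings, not the paper's exact prompts): the instruction is moved from before the input to after it.

```python
# Hypothetical prompt templates (not the paper's exact format) contrasting
# instruction placement before vs. after the input sentence.
instruction = "Translate the following sentence into German."
source = "The report will be published next week."

prompt_pre = f"{instruction}\n{source}\n"    # conventional: instruction first
prompt_post = f"{source}\n{instruction}\n"   # proposed: instruction after the input

print(prompt_pre)
print(prompt_post)
```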
arXiv Detail & Related papers (2023-08-23T12:36:57Z)
- MetaICL: Learning to Learn In Context [87.23056864536613]
We introduce MetaICL, a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set of training tasks; a simplified sketch of the data construction follows.
We show that MetaICL approaches (and sometimes beats) the performance of models fully finetuned on the target task training data, and outperforms much bigger models with nearly 8x as many parameters.
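A simplified sketch of that data construction (assumed, not MetaICL's code): for each training task, k demonstrations are concatenated with a query example, and the model is trained to generate the query's output from that context, so in-context learning itself becomes the training objective.

```python
# Simplified sketch (assumed, not MetaICL's code): build a meta-training instance
# by concatenating k demonstrations from one training task with a query example.
import random

def build_metaicl_instance(task_examples, k=4):
    """task_examples: list of (input_text, output_text) pairs from one training task."""
    demos = random.sample(task_examples, k + 1)
    *context_pairs, (query_x, query_y) = demos
    context = "\n".join(f"{x} {y}" for x, y in context_pairs)
    model_input = f"{context}\n{query_x}"
    return model_input, query_y            # train with LM loss on query_y only

toy_task = [(f"Review: sample text {i}", "positive" if i % 2 else "negative")
            for i in range(20)]
x, y = build_metaicl_instance(toy_task, k=4)
print(x)
print("->", y)
```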
arXiv Detail & Related papers (2021-10-29T17:42:08Z)
- Knowledge-Aware Meta-learning for Low-Resource Text Classification [87.89624590579903]
This paper studies a low-resource text classification problem and bridges the gap between meta-training and meta-testing tasks.
We propose KGML, which introduces an additional representation for each sentence, learned from an extracted sentence-specific knowledge graph.
arXiv Detail & Related papers (2021-09-10T07:20:43Z)
- Variable-Shot Adaptation for Online Meta-Learning [123.47725004094472]
We study the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.
We find that meta-learning solves the full task set with fewer overall labels and greater cumulative performance, compared to standard supervised methods.
These results suggest that meta-learning is an important ingredient for building learning systems that continuously learn and improve over a sequence of problems.
arXiv Detail & Related papers (2020-12-14T18:05:24Z)
- Pre-training Text Representations as Meta Learning [113.3361289756749]
We introduce a learning algorithm which directly optimizes the model's ability to learn text representations for effective learning of downstream tasks.
We show that there is an intrinsic connection between multi-task pre-training and model-agnostic meta-learning with a sequence of meta-train steps.
arXiv Detail & Related papers (2020-04-12T09:05:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.