Active Instruction Tuning: Improving Cross-Task Generalization by
Training on Prompt Sensitive Tasks
- URL: http://arxiv.org/abs/2311.00288v1
- Date: Wed, 1 Nov 2023 04:40:05 GMT
- Authors: Po-Nien Kung, Fan Yin, Di Wu, Kai-Wei Chang, Nanyun Peng
- Abstract summary: Instruction tuning (IT) achieves impressive zero-shot generalization results by training large language models (LLMs) on a massive amount of diverse tasks with instructions.
How to select new tasks to improve the performance and generalizability of IT models remains an open question.
We propose active instruction tuning based on prompt uncertainty, a novel framework to identify informative tasks, and then actively tune the models on the selected tasks.
- Score: 101.40633115037983
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Instruction tuning (IT) achieves impressive zero-shot generalization results
by training large language models (LLMs) on a massive amount of diverse tasks
with instructions. However, how to select new tasks to improve the performance
and generalizability of IT models remains an open question. Training on all
existing tasks is impractical due to prohibitive computational requirements, and
randomly selecting tasks can lead to suboptimal performance. In this work, we
propose active instruction tuning based on prompt uncertainty, a novel
framework to identify informative tasks, and then actively tune the models on
the selected tasks. We represent the informativeness of new tasks with the
disagreement of the current model outputs over perturbed prompts. Our
experiments on NIV2 and Self-Instruct datasets demonstrate that our method
consistently outperforms other baseline strategies for task selection,
achieving better out-of-distribution generalization with fewer training tasks.
Additionally, we introduce a task map that categorizes and diagnoses tasks
based on prompt uncertainty and prediction probability. We discover that
training on ambiguous (prompt-uncertain) tasks improves generalization, while
training on difficult (prompt-certain and low-probability) tasks offers no
benefit, underscoring the importance of task selection for instruction tuning.
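The disagreement-based measure described in the abstract can be illustrated with a minimal sketch. All names here are illustrative, and the word-dropping perturbation and exact-match disagreement metric are simplifying assumptions; the paper's actual perturbation and scoring may differ:

```python
import random

def perturb(prompt: str, drop_rate: float = 0.2, seed: int = 0) -> str:
    """Perturb an instruction by randomly dropping words (one simple choice
    of perturbation; paraphrasing would be another)."""
    rng = random.Random(seed)
    kept = [w for w in prompt.split() if rng.random() > drop_rate]
    return " ".join(kept) if kept else prompt

def prompt_uncertainty(model, instruction: str, example: str, k: int = 8) -> float:
    """Fraction of perturbed-prompt outputs that disagree with the
    original-prompt output: 0.0 = fully prompt-certain, 1.0 = maximally
    prompt-uncertain."""
    original = model(f"{instruction}\n{example}")
    outputs = [model(f"{perturb(instruction, seed=i)}\n{example}") for i in range(k)]
    return sum(o != original for o in outputs) / k

# Toy stand-in for an instruction-tuned LLM: its answer flips whenever the
# perturbation happens to drop the word "polite" from the instruction.
def toy_model(prompt: str) -> str:
    return "yes" if "polite" in prompt else "no"

u = prompt_uncertainty(toy_model, "Reply yes if the text is polite", "Thanks a lot!")
print(u)
```

Under this sketch, a task whose outputs flip under small prompt edits scores high and would be selected for tuning; the paper's task map additionally combines this axis with prediction probability to separate ambiguous from merely difficult tasks.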
Related papers
- Instruction Matters: A Simple yet Effective Task Selection for Optimized Instruction Tuning of Specific Tasks [51.15473776489712]
We introduce a simple yet effective task selection method that leverages instruction information alone to identify relevant tasks.
Our method is significantly more efficient than traditional approaches, which require complex measurements of pairwise transferability between tasks or the creation of data samples for the target task.
Experimental results demonstrate that training on a small set of tasks, chosen solely on the instructions, results in substantial improvements in performance on benchmarks such as P3, Big-Bench, NIV2, and Big-Bench Hard.
arXiv Detail & Related papers (2024-04-25T08:49:47Z)
- Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the orders of all the multi-task data for training.
At the task level, we aim to find the optimal task order that minimizes the total cross-task interference risk.
At the instance level, we measure the difficulty of all instances per task, then divide them into easy-to-difficult mini-batches for training.
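The instance-level step can be sketched as follows. The length-based difficulty proxy and all names are hypothetical stand-ins; the paper's actual difficulty measure is not specified in this summary:

```python
from typing import Callable, List, Sequence

def easy_to_difficult_batches(instances: Sequence[str],
                              difficulty: Callable[[str], float],
                              batch_size: int) -> List[List[str]]:
    """Order instances easiest-first under the given difficulty measure,
    then chunk into mini-batches, so training consumes easy batches
    before hard ones (an instance-level curriculum)."""
    ordered = sorted(instances, key=difficulty)
    return [list(ordered[i:i + batch_size])
            for i in range(0, len(ordered), batch_size)]

# Hypothetical difficulty proxy: longer inputs count as harder.
examples = ["short", "a medium sentence", "a much longer and harder sentence here"]
batches = easy_to_difficult_batches(examples, difficulty=len, batch_size=2)
print(batches)
```

The same chunking works with any scalar difficulty function, e.g. model loss on each instance, in place of the toy length proxy.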
arXiv Detail & Related papers (2024-01-07T18:12:20Z)
- Reinforcement Learning with Success Induced Task Prioritization [68.8204255655161]
We introduce Success Induced Task Prioritization (SITP), a framework for automatic curriculum learning.
The algorithm selects the order of tasks that provides the fastest learning for agents.
We demonstrate that SITP matches or surpasses the results of other curriculum design methods.
arXiv Detail & Related papers (2022-12-30T12:32:43Z)
- Active Task Randomization: Learning Robust Skills via Unsupervised Generation of Diverse and Feasible Tasks [37.73239471412444]
We introduce Active Task Randomization (ATR), an approach that learns robust skills through the unsupervised generation of training tasks.
ATR selects suitable tasks, which consist of an initial environment state and manipulation goal, for learning robust skills by balancing the diversity and feasibility of the tasks.
We demonstrate that the learned skills can be composed by a task planner to solve unseen sequential manipulation problems based on visual inputs.
arXiv Detail & Related papers (2022-11-11T11:24:55Z)
- Improving Task Generalization via Unified Schema Prompt [87.31158568180514]
Unified Prompt is a flexible prompting method that automatically customizes the learnable prompts for each task according to the task input schema.
It models the knowledge shared between tasks while preserving the characteristics of each task's schema.
The framework achieves strong zero-shot and few-shot performance on 16 unseen tasks downstream from 8 task types.
arXiv Detail & Related papers (2022-08-05T15:26:36Z)
- Learning to generate imaginary tasks for improving generalization in meta-learning [12.635773307074022]
The success of meta-learning on existing benchmarks is predicated on the assumption that the distribution of meta-training tasks covers meta-testing tasks.
Recent solutions have pursued augmentation of meta-training tasks, but generating tasks that are both correct and sufficiently imaginary remains an open question.
In this paper, we seek an approach that up-samples meta-training tasks from the task representation via a task up-sampling network. The resulting approach, named Adversarial Task Up-sampling (ATU), generates tasks that maximally contribute to the latest meta-learner by maximizing an adversarial loss.
arXiv Detail & Related papers (2022-06-09T08:21:05Z)
- Adaptive Procedural Task Generation for Hard-Exploration Problems [78.20918366839399]
We introduce Adaptive Procedural Task Generation (APT-Gen) to facilitate reinforcement learning in hard-exploration problems.
At the heart of our approach is a task generator that learns to create tasks from a parameterized task space via a black-box procedural generation module.
To enable curriculum learning in the absence of a direct indicator of learning progress, we propose to train the task generator by balancing the agent's performance in the generated tasks and the similarity to the target tasks.
arXiv Detail & Related papers (2020-07-01T09:38:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.