Exploring Format Consistency for Instruction Tuning
- URL: http://arxiv.org/abs/2307.15504v2
- Date: Mon, 8 Jan 2024 13:26:37 GMT
- Title: Exploring Format Consistency for Instruction Tuning
- Authors: Shihao Liang, Runchu Tian, Kunlun Zhu, Yujia Qin, Huadong Wang, Xin
Cong, Zhiyuan Liu, Xiaojiang Liu, Maosong Sun
- Abstract summary: In this work, we propose a framework named Unified Instruction Tuning (UIT), which calls OpenAI APIs for automatic format transfer among different instruction tuning datasets such as PromptSource, FLAN and CrossFit.
With the framework, we (1) demonstrate the necessity of maintaining format consistency in instruction tuning; (2) improve the generalization performance on unseen instructions with T5-LM-xl; and (3) provide a novel perplexity-based denoising method to reduce the noise of automatic format transfer.
- Score: 79.0698403613366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Instruction tuning has emerged as a promising approach to enhancing large
language models in following human instructions. It is shown that increasing
the diversity and number of instructions in the training data can consistently
enhance generalization performance, which facilitates a recent endeavor to
collect various instructions and integrate existing instruction tuning datasets
into larger collections. However, different users have their unique ways of
expressing instructions, and there often exist variations across different
datasets in the instruction styles and formats, i.e., format inconsistency. In
this work, we propose a framework named Unified Instruction Tuning (UIT), which
calls OpenAI APIs for automatic format transfer among different instruction
tuning datasets such as PromptSource, FLAN and CrossFit. With the framework, we
(1) demonstrate the necessity of maintaining format consistency in instruction
tuning; (2) improve the generalization performance on unseen instructions
with T5-LM-xl; (3) provide a novel perplexity-based denoising method that
reduces the noise of automatic format transfer and makes the UIT framework
more practical; and (4) train a smaller offline model based on GPT-J that
achieves format transfer capability comparable to the OpenAI APIs, reducing
costs in practice. We further analyze variations of targeted formats and
other effects.
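Concretely, the UIT pipeline rewrites each source instance into the target format with an LLM and then filters the rewrites by perplexity. The sketch below illustrates both steps in Python; the prompt wording, the gpt-3.5-turbo and gpt2 model choices, and the threshold value are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of UIT-style format transfer plus perplexity-based
# denoising. Prompt text, model names, and the threshold are illustrative
# assumptions; the paper's exact prompts and settings may differ.
import torch
from openai import OpenAI
from transformers import AutoModelForCausalLM, AutoTokenizer

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transfer_format(instance: str, target_example: str) -> str:
    """Ask a chat model to rewrite `instance` in the style of `target_example`."""
    prompt = (
        "Rewrite the following instruction-tuning instance so that it "
        f"follows the format of this example:\n\n{target_example}\n\n"
        f"Instance to rewrite:\n{instance}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# A small language model scores the fluency of each transferred instance.
scorer_tok = AutoTokenizer.from_pretrained("gpt2")
scorer_lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity under the scoring LM; lower means more plausible text."""
    ids = scorer_tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = scorer_lm(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

def denoise(transferred: list[str], threshold: float = 80.0) -> list[str]:
    """Keep only transferred instances the scoring LM finds fluent enough."""
    return [t for t in transferred if perplexity(t) < threshold]
```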
Related papers
- Leveraging Unstructured Text Data for Federated Instruction Tuning of Large Language Models [45.139087558425395]
Federated instruction tuning enables multiple clients to collaboratively fine-tune a shared large language model (LLM).
Existing work impractically assumes that all clients already hold instruction-tuning data.
We propose a novel framework FedIT-U2S, which can automatically transform unstructured corpus into structured data for federated instruction tuning.
arXiv Detail & Related papers (2024-09-11T09:31:44Z)
- MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity [80.02202386597138]
We construct a high-quality, diverse visual instruction tuning dataset MMInstruct, which consists of 973K instructions from 24 domains.
Our instruction generation engine enables semi-automatic, low-cost, and multi-domain instruction generation at a fraction of the cost of manual construction.
arXiv Detail & Related papers (2024-07-22T17:55:22Z)
- Phased Instruction Fine-Tuning for Large Language Models [12.037895935630882]
Phased Instruction Fine-Tuning (Phased IFT) is proposed, based on the idea that learning to follow instructions is a gradual process.
It assesses instruction difficulty using GPT-4, divides the instruction data into subsets of increasing difficulty, and uptrains the model sequentially on these subsets.
Experiments with Llama-2 7B/13B/70B, Llama-3 8B/70B and Mistral-7B models using Alpaca data show that Phased IFT significantly outperforms One-off IFT (see the sketch after this entry).
arXiv Detail & Related papers (2024-06-01T04:25:26Z)
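The phased schedule is simple to express in code. Below is a minimal sketch in Python, assuming each example already carries a difficulty score (assigned by GPT-4 in the paper); the three-phase split and the train_one_phase callback are illustrative assumptions, not the paper's interface.

```python
# Minimal sketch of phased instruction fine-tuning: sort examples by a
# pre-computed difficulty score, split them into phases of increasing
# difficulty, and uptrain the same model sequentially on each phase.
# The `train_one_phase` callback and three-phase split are illustrative.
from typing import Callable, Sequence

def phased_ift(
    examples: Sequence[dict],                      # each dict has a "difficulty" score
    train_one_phase: Callable[[list], None],       # continues training the same model
    num_phases: int = 3,
) -> None:
    ordered = sorted(examples, key=lambda ex: ex["difficulty"])
    phase_size = len(ordered) // num_phases
    for i in range(num_phases):
        start = i * phase_size
        end = None if i == num_phases - 1 else (i + 1) * phase_size
        train_one_phase(list(ordered[start:end]))  # easy subsets first
```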
- Mosaic-IT: Free Compositional Data Augmentation Improves Instruction Tuning [30.82220015525281]
Mosaic Instruction Tuning (Mosaic-IT) is a human/model-free compositional data augmentation method.
Mosaic-IT randomly creates rich and diverse augmentations from existing instruction tuning data.
Our evaluations demonstrate superior performance and training efficiency for Mosaic-IT (see the sketch after this entry).
arXiv Detail & Related papers (2024-05-22T04:08:20Z)
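A minimal sketch of the compositional idea in Python, assuming a simple "answer each task in order" meta-instruction for packing several existing pairs into one composite example; Mosaic-IT's full recipe (including its formatting rules) is richer than this.

```python
# Minimal sketch of Mosaic-style compositional augmentation: pack several
# existing (instruction, response) pairs into one composite training example
# that asks the model to answer all of them in order. The meta-instruction
# wording and the 2-4 sample range are illustrative assumptions.
import random

def mosaic_augment(pool: list, k_min: int = 2, k_max: int = 4) -> dict:
    picked = random.sample(pool, random.randint(k_min, k_max))
    instruction = "Answer each of the following tasks in order:\n" + "\n".join(
        f"{i + 1}. {ex['instruction']}" for i, ex in enumerate(picked)
    )
    response = "\n".join(
        f"{i + 1}. {ex['response']}" for i, ex in enumerate(picked)
    )
    return {"instruction": instruction, "response": response}
```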
- Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models [125.91897197446379]
We find that MoE models benefit more from instruction tuning than dense models.
Our most powerful model, FLAN-MOE-32B, surpasses the performance of FLAN-PALM-62B on four benchmark tasks.
arXiv Detail & Related papers (2023-05-24T04:22:26Z)
- Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation [92.2167864437497]
We propose Dynosaur, a dynamic growth paradigm for the automatic curation of instruction-tuning data.
Based on the metadata of existing datasets, we use LLMs to automatically construct instruction-tuning data by identifying relevant data fields and generating appropriate instructions.
By leveraging the existing annotated datasets, Dynosaur offers several advantages: 1) it reduces the API cost for generating instructions; 2) it provides high-quality data for instruction tuning; and 3) it supports the continuous improvement of models by generating instruction-tuning data when a new annotated dataset becomes available. A sketch of the metadata-driven construction follows this entry.
arXiv Detail & Related papers (2023-05-23T17:56:26Z)
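Dynosaur's cost saving comes from prompting per dataset rather than per example: the LLM reads metadata once, and the resulting task is instantiated over every annotated instance for free. A minimal sketch under that assumption, using the OpenAI Python client and an illustrative JSON task schema (not Dynosaur's exact interface):

```python
# Minimal sketch of Dynosaur-style curation: an LLM reads a dataset's
# metadata once and proposes a task (instruction plus input/output fields);
# the task is then instantiated over every annotated example at no extra
# API cost. Prompt text and the JSON schema are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

def propose_task(metadata: str) -> dict:
    """One API call per dataset, not per example, which keeps cost low."""
    prompt = (
        "Given this dataset metadata, propose an instruction-tuning task as "
        'JSON with keys "instruction", "input_field", "output_field":\n'
        f"{metadata}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)

def instantiate(task: dict, examples: list) -> list:
    """Turn existing annotations into instruction-tuning instances."""
    return [
        {
            "instruction": task["instruction"],
            "input": ex[task["input_field"]],
            "output": ex[task["output_field"]],
        }
        for ex in examples
    ]
```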
- Gradient-Regulated Meta-Prompt Learning for Generalizable Vision-Language Models [137.74524357614285]
We introduce a novel Gradient-RegulAted Meta-prompt learning framework.
It helps pre-trained models adapt to downstream tasks in a parameter- and data-efficient way.
GRAM can be easily incorporated into various prompt tuning methods in a model-agnostic way.
arXiv Detail & Related papers (2023-03-12T05:03:37Z)
- GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models [80.03815493269522]
GrIPS is a gradient-free, edit-based search approach for improving task instructions for large language models.
With InstructGPT models, GrIPS improves the average task performance by up to 4.30 percentage points on eight classification tasks.
We show our edits can simplify instructions and at times make them incoherent, but nonetheless improve accuracy (see the sketch after this entry).
arXiv Detail & Related papers (2022-03-14T16:54:46Z)
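A minimal sketch of a GrIPS-style search loop in Python, using only delete and swap edits as a stand-in for the paper's full edit set (which also includes paraphrase and addition operations); the score callback stands for any small dev-set metric:

```python
# Minimal sketch of a GrIPS-style gradient-free search: mutate the
# instruction with word-level edits, score each candidate on a small dev
# set, and greedily keep the best. The delete/swap edits here are a
# simplified stand-in for the paper's full phrase-level edit operations.
import random
from typing import Callable

def word_edits(instruction: str, num_candidates: int = 8) -> list:
    words = instruction.split()
    candidates = []
    for _ in range(num_candidates):
        w = words[:]
        if random.random() < 0.5 and len(w) > 1:   # delete a random word
            del w[random.randrange(len(w))]
        elif len(w) > 1:                           # swap two random words
            i, j = random.sample(range(len(w)), 2)
            w[i], w[j] = w[j], w[i]
        candidates.append(" ".join(w))
    return candidates

def grips_search(
    instruction: str,
    score: Callable[[str], float],  # dev-set performance of a candidate prompt
    iterations: int = 10,
) -> str:
    best, best_score = instruction, score(instruction)
    for _ in range(iterations):
        for cand in word_edits(best):
            s = score(cand)
            if s > best_score:
                best, best_score = cand, s
    return best
```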