MoDS: Model-oriented Data Selection for Instruction Tuning
- URL: http://arxiv.org/abs/2311.15653v1
- Date: Mon, 27 Nov 2023 09:33:13 GMT
- Title: MoDS: Model-oriented Data Selection for Instruction Tuning
- Authors: Qianlong Du, Chengqing Zong and Jiajun Zhang
- Abstract summary: We present a model-oriented data selection (MoDS) approach, which selects instruction data based on new criteria covering three aspects: quality, coverage and necessity.
Experimental results show that the model fine-tuned with 4,000 instruction pairs selected by our approach performs better than the model fine-tuned with the full original dataset.
- Score: 35.60124047070829
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Instruction tuning has become the de facto method to equip large language
models (LLMs) with the ability to follow user instructions. Usually, hundreds of
thousands or millions of instruction-following pairs are employed to fine-tune
the foundation LLMs. Recently, some studies have shown that a small amount of
high-quality instruction data is enough. However, how to select appropriate
instruction data for a given LLM is still an open problem. To address this
problem, in this paper we present a model-oriented data selection (MoDS)
approach, which selects instruction data based on new criteria covering three
aspects: quality, coverage and necessity. First, our approach uses a quality
evaluation model to select a high-quality subset from the original instruction
dataset, and then applies an algorithm to further select from the high-quality
subset a seed instruction dataset with good coverage. The seed dataset is used
to fine-tune the foundation LLM to obtain an initial instruction-following LLM.
Finally, we develop a necessity evaluation model to identify the instruction
data on which the initial instruction-following LLM performs badly, and treat
them as necessary instructions for further improving the LLM. In this way, we
obtain a small, high-quality, broad-coverage and high-necessity subset of the
original instruction datasets. Experimental results show that a model
fine-tuned with 4,000 instruction pairs selected by our approach can outperform
a model fine-tuned with the full original dataset of 214k instruction pairs.
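As a rough illustration of the pipeline described above, the sketch below strings the three stages together in Python. The quality scorer, the embedding function, the fine-tuning call and the k-center-greedy coverage step are all assumptions made for illustration; the abstract does not fix any of these choices, and examples are assumed to be dicts with "instruction" and "output" keys.

```python
import numpy as np

def select_instruction_data(pool, quality_score, embed, finetune,
                            quality_threshold=0.5, seed_size=1000):
    # 1) Quality: keep instruction pairs a quality evaluation model rates highly.
    high_quality = [ex for ex in pool if quality_score(ex) >= quality_threshold]

    # 2) Coverage: pick a diverse seed set; k-center greedy is one plausible choice.
    vecs = np.stack([embed(ex["instruction"]) for ex in high_quality])
    chosen = [0]
    dists = np.linalg.norm(vecs - vecs[0], axis=1)
    while len(chosen) < min(seed_size, len(high_quality)):
        idx = int(np.argmax(dists))   # farthest example from the current seed set
        chosen.append(idx)
        dists = np.minimum(dists, np.linalg.norm(vecs - vecs[idx], axis=1))
    seed = [high_quality[i] for i in chosen]

    # 3) Necessity: fine-tune on the seed, then keep the remaining examples the
    # initial model still handles badly (the same scorer stands in for the
    # paper's necessity evaluation model).
    initial_model = finetune(seed)    # hypothetical fine-tuning call
    necessary = [ex for ex in high_quality if ex not in seed and
                 quality_score({**ex, "output": initial_model(ex["instruction"])})
                 < quality_threshold]
    return seed + necessary
```

K-center greedy is used here only as a plausible stand-in for the paper's coverage algorithm: each step adds the example farthest from everything already chosen, which spreads the seed set across the embedding space.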
Related papers
- Align$^2$LLaVA: Cascaded Human and Large Language Model Preference Alignment for Multi-modal Instruction Curation [56.75665429851673]
This paper introduces a novel instruction curation algorithm, derived from two complementary perspectives: human preference alignment and LLM preference alignment.
Experiments demonstrate that we can maintain or even improve model performance by compressing synthetic multimodal instructions by up to 90%.
arXiv Detail & Related papers (2024-09-27T08:20:59Z)
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report absolute improvements of approximately 15% on classification tasks and 18% on generation tasks under the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
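A minimal sketch of the self-synthetic loop the SELF-GUIDE entry describes, assuming a `student` object with hypothetical `generate` and `finetune` methods; the prompts and the final filter are invented for illustration and are not the paper's.

```python
# Hedged sketch of a SELF-GUIDE-style loop: the student LLM synthesizes its own
# task-specific input-output pairs and is then finetuned on them.
# `student.generate` and `student.finetune` are assumed interfaces.
def self_guide(student, task_instruction, n_pairs=512):
    pairs = []
    for _ in range(n_pairs):
        # The student drafts an input for the task, then answers it itself.
        x = student.generate(f"Write one input for this task:\n{task_instruction}")
        y = student.generate(f"{task_instruction}\nInput: {x}\nOutput:")
        pairs.append({"instruction": task_instruction, "input": x, "output": y})
    # The paper also filters the synthetic pairs; a trivial emptiness check
    # stands in for that quality filter here.
    pairs = [p for p in pairs if p["input"] and p["output"]]
    return student.finetune(pairs)
```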
- CodecLM: Aligning Language Models with Tailored Synthetic Data [51.59223474427153]
We introduce CodecLM, a framework for adaptively generating high-quality synthetic data for instruction-following abilities.
We first encode seed instructions into metadata, which are concise keywords generated on-the-fly to capture the target instruction distribution.
We also introduce Self-Rubrics and Contrastive Filtering during decoding to tailor data-efficient samples.
arXiv Detail & Related papers (2024-04-08T21:15:36Z)
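The encode-then-decode idea in the CodecLM entry can be sketched as follows. `llm` is an assumed text-in/text-out callable and both prompts are invented for illustration; Self-Rubrics and Contrastive Filtering are only noted in a comment, not implemented.

```python
# Rough sketch of CodecLM's encode-decode flow: compress each seed instruction
# into metadata keywords, then decode new tailored instructions from them.
def codec_generate(llm, seed_instructions, n_per_seed=4):
    synthetic = []
    for seed in seed_instructions:
        # Encode: distill the seed into concise on-the-fly keywords.
        metadata = llm("List concise keywords for the use case and required "
                       f"skills of this instruction:\n{seed}")
        for _ in range(n_per_seed):
            # Decode: generate a fresh instruction matching the metadata.
            synthetic.append(llm(f"Write a challenging instruction for: {metadata}"))
    # CodecLM additionally applies Self-Rubrics and Contrastive Filtering during
    # decoding to keep data-efficient samples; both are omitted in this sketch.
    return synthetic
```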
- A Survey on Data Selection for LLM Instruction Tuning [18.94987580516951]
We propose a new taxonomy of the data selection methods and provide a detailed introduction to recent advances.
We emphasize the open challenges and present new frontiers of this task.
arXiv Detail & Related papers (2024-02-04T13:32:01Z)
- CoachLM: Automatic Instruction Revisions Improve the Data Quality in LLM Instruction Tuning [32.54921739100195]
We propose CoachLM, a novel approach to enhance the quality of instruction datasets through automatic revisions on samples in the dataset.
CoachLM is trained on samples revised by human experts and significantly increases the proportion of high-quality samples in the dataset from 17.7% to 78.9%.
Results show that CoachLM improves the instruction-following capabilities of the instruction-tuned LLM by an average of 29.9%.
arXiv Detail & Related papers (2023-11-22T09:04:57Z)
- Tuna: Instruction Tuning using Feedback from Large Language Models [74.04950416204551]
We propose finetuning an instruction-tuned large language model using our novel probabilistic ranking and contextual ranking approaches.
Probabilistic ranking enables the instruction-tuned model to inherit the relative rankings of high-quality and low-quality responses from the teacher LLM.
On the other hand, learning with contextual ranking allows the model to refine its own response distribution using the contextual understanding ability of stronger LLMs.
arXiv Detail & Related papers (2023-10-20T09:55:06Z)
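One way to read the probabilistic-ranking objective in the Tuna entry is as a pairwise ranking loss over the student's sequence log-probabilities, ordered by the teacher LLM. The hinge-style loss below is an illustrative stand-in, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def ranking_loss(logprobs, teacher_order, margin=0.1):
    """logprobs: student log P(response | instruction) for k candidate responses,
    shape (k,); teacher_order: indices of those responses from best to worst."""
    loss = torch.tensor(0.0)
    # Penalize every pair where the student scores a worse response too close
    # to (or above) a better one, so the teacher's relative ranking is inherited.
    for i in range(len(teacher_order)):
        for j in range(i + 1, len(teacher_order)):
            better = logprobs[teacher_order[i]]
            worse = logprobs[teacher_order[j]]
            loss = loss + F.relu(margin - (better - worse))
    return loss
```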
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM [62.30753425449056]
We propose a novel closed-loop system that bridges data generation, model training, and evaluation.
Within each loop, the MLLM-DataEngine first analyzes the weaknesses of the model based on the evaluation results.
For targeting, we propose an Adaptive Bad-case Sampling module, which adjusts the ratio of different types of data.
For quality, we resort to GPT-4 to generate high-quality data with each given data type.
arXiv Detail & Related papers (2023-08-25T01:41:04Z)
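The closed loop the MLLM-DataEngine entry describes can be sketched as below; `evaluate`, `generate_data` and `train` are hypothetical placeholders, and the quota rule is one simple reading of Adaptive Bad-case Sampling.

```python
# Sketch of an MLLM-DataEngine-style loop: evaluate, re-weight data types toward
# bad cases, generate new data (the paper uses GPT-4), retrain, repeat.
def data_engine_loop(model, data_types, evaluate, generate_data, train,
                     rounds=3, budget=1000):
    for _ in range(rounds):
        error_rate = evaluate(model, data_types)   # per-type error rates in [0, 1]
        total = sum(error_rate.values()) or 1.0
        # Adaptive Bad-case Sampling: spend more of the budget on weak types.
        quotas = {t: int(budget * error_rate[t] / total) for t in data_types}
        new_data = [ex for t, n in quotas.items() for ex in generate_data(t, n)]
        model = train(model, new_data)
    return model
```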
- ReLLa: Retrieval-enhanced Large Language Models for Lifelong Sequential Behavior Comprehension in Recommendation [43.270424225285105]
We focus on adapting and empowering a pure large language model for zero-shot and few-shot recommendation tasks.
We propose Retrieval-enhanced Large Language models (ReLLa) for recommendation tasks in both zero-shot and few-shot settings.
arXiv Detail & Related papers (2023-08-22T02:25:04Z)
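A small sketch of the retrieval-enhanced prompting idea from the ReLLa entry, assuming a generic sentence-embedding function `embed`: instead of truncating a long behavior history, the behaviors most semantically relevant to the target item are retrieved into the prompt.

```python
import numpy as np

def build_rec_prompt(embed, history, target_item, k=10):
    # Score each past behavior by embedding similarity to the target item.
    target_vec = embed(target_item)
    relevant = sorted(history,
                      key=lambda b: -float(np.dot(embed(b), target_vec)))[:k]
    lines = "\n".join(f"- {b}" for b in relevant)
    return (f"The user previously interacted with:\n{lines}\n"
            f"Will the user like {target_item}? Answer yes or no.")
```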
- Instruction Mining: Instruction Data Selection for Tuning Large Language Models [18.378654454336136]
InstructMining is designed to automatically select premium instruction-following data for finetuning large language models.
We show that InstructMining achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and the Hugging Face Open LLM leaderboard.
arXiv Detail & Related papers (2023-07-12T16:37:31Z)