Rethinking the Instruction Quality: LIFT is What You Need
- URL: http://arxiv.org/abs/2312.11508v2
- Date: Wed, 27 Dec 2023 08:23:14 GMT
- Title: Rethinking the Instruction Quality: LIFT is What You Need
- Authors: Yang Xu, Yongqiang Yao, Yufan Huang, Mengnan Qi, Maoquan Wang, Bin Gu,
Neel Sundaresan
- Abstract summary: Existing quality improvement methods alter instruction data through dataset expansion or curation.
We propose LIFT (LLM Instruction Fusion Transfer), a novel and versatile paradigm designed to elevate the instruction quality to new heights.
Experimental results demonstrate that, even with a limited quantity of high-quality instruction data selected by our paradigm, LLMs consistently uphold robust performance across various tasks.
- Score: 20.829372251475476
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Instruction tuning, a specialized technique to enhance large language model
(LLM) performance via instruction datasets, relies heavily on the quality of
employed data. Existing quality improvement methods alter instruction data
through dataset expansion or curation. However, the expansion method risks data
redundancy, potentially compromising LLM performance, while the curation
approach confines the LLM's potential to the original dataset. Our aim is to
surpass the original data quality without encountering these shortcomings. To
achieve this, we propose LIFT (LLM Instruction Fusion Transfer), a novel and
versatile paradigm designed to elevate the instruction quality to new heights.
LIFT strategically broadens data distribution to encompass more high-quality
subspaces and eliminates redundancy, concentrating on high-quality segments
across overall data subspaces. Experimental results demonstrate that, even with
a limited quantity of high-quality instruction data selected by our paradigm,
LLMs not only consistently uphold robust performance across various tasks but
also surpass some state-of-the-art results, highlighting the significant
improvement in instruction quality achieved by our paradigm.
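The abstract does not spell out LIFT's selection procedure, but its stated objective, broadening coverage of high-quality subspaces while eliminating redundancy, can be sketched as a greedy quality-plus-diversity selection over instruction embeddings. Everything below (the `quality` and `embed` callables, the MMR-style weighting) is an illustrative assumption, not the paper's actual algorithm:

```python
import math

def fuse_and_select(pool, k, quality, embed, diversity_weight=0.5):
    """Greedily pick k instructions, trading off a quality score against
    distance from already-selected items. This is an MMR-style sketch of a
    'high quality, low redundancy' objective, not LIFT's exact method."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    selected = []
    remaining = list(pool)
    while remaining and len(selected) < k:
        def score(item):
            q = quality(item)
            if not selected:
                return q
            # redundancy penalty: closeness to the nearest selected item
            d = min(dist(embed(item), embed(s)) for s in selected)
            return (1 - diversity_weight) * q + diversity_weight * d
        best = max(remaining, key=score)
        remaining.remove(best)
        selected.append(best)
    return selected
```

With `diversity_weight=0`, this degenerates to pure quality curation; raising it pushes the subset toward the broader distributional coverage the abstract emphasizes.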
Related papers
- Data Quality Control in Federated Instruction-tuning of Large Language Models [43.29678396558287]
We propose a new framework for federated instruction tuning of large language models (LLMs) with data quality control (FedDQC).

Our approach introduces an efficient metric to assess each client's instruction-response alignment (IRA), identifying potentially noisy data through single-shot inference.
We conduct extensive experiments on 4 synthetic and a real-world dataset, and compare our method with baselines adapted from centralized setting.
arXiv Detail & Related papers (2024-10-15T12:14:57Z)
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
- Empowering Large Language Models for Textual Data Augmentation [23.483960932358396]
Large language models (LLMs) can potentially act as a powerful tool for textual data augmentation.
This work proposes a new solution, which can automatically generate a large pool of augmentation instructions and select the most suitable task-informed instructions.
Empirically, the proposed approach consistently generates augmented data with better quality compared to non-LLM and LLM-based data augmentation methods.
arXiv Detail & Related papers (2024-04-26T18:04:25Z)
- SHED: Shapley-Based Automated Dataset Refinement for Instruction Fine-Tuning [16.307467144690683]
Large Language Models can achieve desirable performance with only a small amount of high-quality data.
Identifying high-quality data from vast datasets to curate small yet effective datasets has emerged as a critical challenge.
We introduce SHED, an automated dataset refinement framework based on Shapley value for instruction fine-tuning.
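SHED's concrete estimator is not given in this summary, but the underlying idea, valuing each training example by its average marginal contribution to a dataset-utility function, is the Shapley value, which can be approximated by Monte Carlo sampling over random permutations. The `utility` callable below is a stand-in; SHED itself relies on proxy models and clustering to make the computation tractable:

```python
import random

def shapley_values(points, utility, rounds=50, seed=0):
    """Monte Carlo estimate of each data point's Shapley value under a
    dataset-utility function: average, over random orderings, of the
    utility gain when the point joins the growing prefix."""
    rng = random.Random(seed)
    contrib = {p: 0.0 for p in points}
    for _ in range(rounds):
        perm = points[:]
        rng.shuffle(perm)
        prefix, prev = [], utility([])
        for p in perm:
            prefix.append(p)
            cur = utility(prefix)
            contrib[p] += cur - prev
            prev = cur
    return {p: v / rounds for p, v in contrib.items()}
```

Keeping only the highest-valued examples then yields the small, effective subset the entry describes.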
arXiv Detail & Related papers (2024-04-23T04:56:48Z)
- LLM-DA: Data Augmentation via Large Language Models for Few-Shot Named Entity Recognition [67.96794382040547]
LLM-DA is a novel data augmentation technique based on large language models (LLMs) for the few-shot NER task.
Our approach involves employing 14 contextual rewriting strategies, designing entity replacements of the same type, and incorporating noise injection to enhance robustness.
arXiv Detail & Related papers (2024-02-22T14:19:56Z)
- How to Train Data-Efficient LLMs [56.41105687693619]
We study data-efficient approaches for pre-training large language models (LLMs).
In our comparison of 19 samplers, involving hundreds of evaluation tasks and pre-training runs, we find that Ask-LLM and Density sampling are the best methods in their respective categories.
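The summary does not define Density sampling; as a rough illustration, density-based selection scores each example with a kernel density estimate over its neighbors in embedding space and keeps low-density examples to expand coverage. The Gaussian kernel, bandwidth, and keep-lowest-k rule below are illustrative assumptions, not the paper's exact setup:

```python
import math

def kde_scores(points, bandwidth=1.0):
    """Gaussian kernel density estimate for each point, summed over all
    other points: high scores mark crowded (redundant) regions."""
    scores = []
    for i, p in enumerate(points):
        s = 0.0
        for j, q in enumerate(points):
            if i == j:
                continue
            d2 = sum((a - b) ** 2 for a, b in zip(p, q))
            s += math.exp(-d2 / (2 * bandwidth ** 2))
        scores.append(s)
    return scores

def density_sample(points, k, bandwidth=1.0):
    """Return the indices of the k lowest-density points, i.e. the
    examples that add the most coverage to the selected set."""
    scores = kde_scores(points, bandwidth)
    return sorted(range(len(points)), key=lambda i: scores[i])[:k]
```

An isolated example far from every cluster gets the lowest density and is selected first, which is the coverage-expanding behavior such samplers aim for.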
arXiv Detail & Related papers (2024-02-15T02:27:57Z)
- Curated LLM: Synergy of LLMs and Data Curation for tabular augmentation in low-data regimes [57.62036621319563]
We introduce CLLM, which leverages the prior knowledge of Large Language Models (LLMs) for data augmentation in the low-data regime.
We demonstrate the superior performance of CLLM in the low-data regime compared to conventional generators.
arXiv Detail & Related papers (2023-12-19T12:34:46Z)
- Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning [79.32236399694077]
Low-quality data in the training set are usually detrimental to instruction tuning.
We propose a novel method, termed "reflection-tuning".
This approach utilizes an oracle LLM to recycle the original training data by introspecting and enhancing the quality of instructions and responses in the data.
arXiv Detail & Related papers (2023-10-18T05:13:47Z)
- From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning [52.257422715393574]
We introduce a self-guided methodology for Large Language Models (LLMs) to autonomously discern and select cherry samples from open-source datasets.
Our key innovation, the Instruction-Following Difficulty (IFD) metric, identifies discrepancies between a model's expected responses and its intrinsic generation capability.
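That paper defines IFD as the ratio of the model's average per-token loss on the response conditioned on the instruction to its loss on the response alone; scores near or above 1 indicate the instruction barely helps the model generate the answer, flagging a hard or misaligned sample. A minimal sketch, assuming the per-token losses have already been extracted from a causal LM (obtaining them is model-specific and omitted here):

```python
def ifd_score(loss_with_instruction, loss_without_instruction):
    """Instruction-Following Difficulty: mean per-token loss on the answer
    conditioned on the instruction, divided by the mean per-token loss on
    the answer alone. Lower values mean the instruction helped more."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(loss_with_instruction) / mean(loss_without_instruction)
```

Cherry-sample selection then amounts to ranking a dataset by this score and keeping the band of examples the paper identifies as most informative.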
arXiv Detail & Related papers (2023-08-23T09:45:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.