RECOST: External Knowledge Guided Data-efficient Instruction Tuning
- URL: http://arxiv.org/abs/2402.17355v1
- Date: Tue, 27 Feb 2024 09:47:36 GMT
- Title: RECOST: External Knowledge Guided Data-efficient Instruction Tuning
- Authors: Qi Zhang, Yiming Zhang, Haobo Wang, Junbo Zhao
- Abstract summary: We argue that most current data-efficient instruction-tuning methods are highly dependent on the quality of the original instruction-tuning dataset.
We propose a framework dubbed RECOST, which integrates external-knowledge-base re-ranking and diversity-consistent sampling into a single pipeline.
- Score: 25.985023475991625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the current landscape of large language models (LLMs), the process of
instruction tuning serves as an essential step. Considering the high computing
power overhead, data-efficient instruction tuning was proposed to reduce the
training data size in this process, aiming at selecting high-quality
instructional data. Nevertheless, we argue that most current data-efficient
instruction-tuning methods are highly dependent on the quality of the original
instruction-tuning dataset. When it comes to datasets synthesized by LLMs, a
common scenario in this field, dirty samples will even be selected with a
higher probability than other samples. To address these challenges, we utilize
external knowledge (relevant examples or paragraphs) to evaluate samples
synthesized by LLMs with an in-context-based relative predictive entropy. Based
on this new metric, we propose a framework, dubbed RECOST, which integrates
external-knowledge-base re-ranking and diversity-consistent sampling into a
single pipeline. Through extensive experiments on several synthetic datasets
(Alpaca and Alpaca-gpt4), we demonstrate the effectiveness of our method and
achieve even better results with only 1% of the full dataset.
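The abstract's core scoring idea can be sketched in a few lines. The function names and the exact normalization below are illustrative assumptions, not the paper's published formulation: the sketch only shows the intuition that a synthesized sample is scored by how much retrieved external knowledge, placed in context, reduces the model's predictive entropy over that sample.

```python
import math

def predictive_entropy(token_probs):
    # Mean negative log-probability of the tokens of a candidate sample,
    # as assigned by a language model (token_probs are those probabilities).
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

def relative_predictive_entropy(probs_no_context, probs_with_context):
    # Hypothetical in-context relative score: how much conditioning on
    # retrieved external knowledge reduces the model's uncertainty about
    # a synthesized sample. Positive values suggest the sample is supported
    # by the external knowledge; negative values suggest a "dirty" sample.
    h_base = predictive_entropy(probs_no_context)
    h_ctx = predictive_entropy(probs_with_context)
    return (h_base - h_ctx) / h_base  # normalized entropy reduction

# Toy illustration with made-up token probabilities:
clean = relative_predictive_entropy([0.2, 0.3, 0.25], [0.6, 0.7, 0.65])
dirty = relative_predictive_entropy([0.2, 0.3, 0.25], [0.1, 0.15, 0.12])
```

In a real pipeline these probabilities would come from a scoring LLM run twice per sample, with and without the retrieved paragraph in the prompt; the resulting scores would then feed the re-ranking stage.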
Related papers
- Dataset Quantization with Active Learning based Adaptive Sampling [11.157462442942775]
We show that maintaining performance is feasible even with uneven sample distributions.
We propose a novel active learning based adaptive sampling strategy to optimize the sample selection.
Our approach outperforms the state-of-the-art dataset compression methods.
arXiv Detail & Related papers (2024-07-09T23:09:18Z)
- Entropy Law: The Story Behind Data Compression and LLM Performance [115.70395740286422]
We find that model performance is negatively correlated with the compression ratio of the training data, which usually yields a lower training loss.
Based on the entropy law, we propose an efficient and universal data selection method.
We also present an interesting application of entropy law that can detect potential performance risks at the beginning of model training.
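A simple way to make the entropy-law idea concrete is to measure how compressible a candidate training subset is. The use of `zlib` here is my own proxy, not the paper's metric; it only illustrates that redundant data compresses far better than diverse data, which is the quantity the entropy law relates to model performance.

```python
import zlib

def compression_ratio(texts):
    # Compressed size divided by raw size of a candidate training subset.
    # A lower ratio means the data is more redundant (more compressible);
    # the entropy law relates this ratio to downstream model performance.
    raw = "\n".join(texts).encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

redundant = ["the cat sat on the mat"] * 50
diverse = [f"sample {i}: unrelated sentence number {i * i}" for i in range(50)]
# The repeated subset compresses much better than the varied one.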
arXiv Detail & Related papers (2024-07-09T08:14:29Z)
- Retrieval-Augmented Data Augmentation for Low-Resource Domain Tasks [66.87070857705994]
In low-resource settings, the amount of seed data samples to use for data augmentation is very small.
We propose a novel method that augments training data by incorporating a wealth of examples from other datasets.
This approach can ensure that the generated data is not only relevant but also more diverse than what could be achieved using the limited seed data alone.
arXiv Detail & Related papers (2024-02-21T02:45:46Z)
- How to Train Data-Efficient LLMs [56.41105687693619]
We study data-efficient approaches for pre-training large language models (LLMs).
In our comparison of 19 samplers, involving hundreds of evaluation tasks and pre-training runs, we find that Ask-LLM and Density sampling are the best methods in their respective categories.
arXiv Detail & Related papers (2024-02-15T02:27:57Z) - Exploring Learning Complexity for Downstream Data Pruning [9.526877053855998]
We propose to treat the learning complexity (LC) as the scoring function for classification and regression tasks.
For the instruction fine-tuning of large language models, our method achieves state-of-the-art performance with stable convergence.
arXiv Detail & Related papers (2024-02-08T02:29:33Z) - Improving Text Embeddings with Large Language Models [59.930513259982725]
We introduce a novel and simple method for obtaining high-quality text embeddings using only synthetic data and less than 1k training steps.
We leverage proprietary LLMs to generate diverse synthetic data for hundreds of thousands of text embedding tasks across 93 languages.
Experiments demonstrate that our method achieves strong performance on highly competitive text embedding benchmarks without using any labeled data.
arXiv Detail & Related papers (2023-12-31T02:13:18Z) - One-Shot Learning as Instruction Data Prospector for Large Language Models [108.81681547472138]
Nuggets uses one-shot learning to select high-quality instruction data from extensive datasets.
We show that instruction tuning with the top 1% of examples curated by Nuggets substantially outperforms conventional methods employing the entire dataset.
arXiv Detail & Related papers (2023-12-16T03:33:12Z) - Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning [47.02160072880698]
We introduce a self-evolving mechanism that allows the model itself to actively sample subsets that are equally or even more effective.
The key to our data sampling technique lies in the enhancement of diversity in the chosen subsets.
Extensive experiments across three datasets and benchmarks demonstrate the effectiveness of DiverseEvol.
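Diversity-driven subset selection of this kind is often implemented with farthest-point (k-center) greedy selection over sample embeddings. The sketch below is that generic technique, not DiverseEvol's actual self-evolving sampler, and the toy 2-D "embeddings" are made up for illustration.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def greedy_diverse_subset(embeddings, k):
    # Farthest-point (k-center) greedy selection: repeatedly add the point
    # that is farthest from everything already selected, which spreads the
    # chosen subset across the embedding space.
    selected = [0]  # arbitrary seed point
    while len(selected) < k:
        best = max(
            (i for i in range(len(embeddings)) if i not in selected),
            key=lambda i: min(euclidean(embeddings[i], embeddings[j])
                              for j in selected),
        )
        selected.append(best)
    return selected

# Toy 2-D "embeddings": the near-duplicate of point 0 is skipped in favour
# of the far-apart points.
points = [(0, 0), (0.1, 0), (10, 0), (0, 10), (5, 5)]
subset = greedy_diverse_subset(points, 3)
```

In practice `embeddings` would be model-derived sentence embeddings of instruction samples, and `k` the target subset size.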
arXiv Detail & Related papers (2023-11-14T14:10:40Z)
- Optimal Sample Selection Through Uncertainty Estimation and Its Application in Deep Learning [22.410220040736235]
We present a theoretically optimal solution for addressing both coreset selection and active learning.
Our proposed method, COPS, is designed to minimize the expected loss of a model trained on subsampled data.
arXiv Detail & Related papers (2023-09-05T14:06:33Z)
- Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning through generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
arXiv Detail & Related papers (2022-08-26T04:28:01Z)
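The over-sampling step the AutoSMOTE entry refers to can be illustrated with a minimal SMOTE-style sketch. This is the classic interpolation idea only; AutoSMOTE itself learns its sampling decisions with hierarchical reinforcement learning, and the data and parameter names below are invented for the example.

```python
import random

def smote_like_oversample(minority, n_new, k=2, seed=0):
    # Minimal SMOTE-style over-sampling: each synthetic point is a random
    # interpolation between a minority sample and one of its k nearest
    # minority neighbours, so new points stay inside the minority region.
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Toy minority class: synthetic points land between the existing samples.
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]
new_pts = smote_like_oversample(minority, n_new=5)
```

The generated points would then be added to the training set to balance the class ratio before fitting a classifier.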
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.