Beyond Scale: the Diversity Coefficient as a Data Quality Metric
Demonstrates LLMs are Pre-trained on Formally Diverse Data
- URL: http://arxiv.org/abs/2306.13840v2
- Date: Tue, 26 Sep 2023 23:29:05 GMT
- Title: Beyond Scale: the Diversity Coefficient as a Data Quality Metric
Demonstrates LLMs are Pre-trained on Formally Diverse Data
- Authors: Alycia Lee, Brando Miranda, Sudharsan Sundar, Sanmi Koyejo
- Abstract summary: We use the recently proposed Task2Vec diversity coefficient to ground and understand formal aspects of data quality.
Specifically, we measure the diversity coefficient of publicly available pre-training datasets to demonstrate that their formal diversity is high.
We conclude the diversity coefficient is reliable, show it's high for publicly available LLM datasets, and conjecture it can be used to build useful diverse datasets for LLMs.
- Score: 12.76278784443243
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current trends to pre-train capable Large Language Models (LLMs) mostly focus
on scaling of model and dataset size. However, the quality of pre-training data
is an important factor for training powerful LLMs, yet it is a nebulous concept
that has not been fully characterized. Therefore, we use the recently proposed
Task2Vec diversity coefficient to ground and understand formal aspects of data
quality, to go beyond scale alone. Specifically, we measure the diversity
coefficient of publicly available pre-training datasets to demonstrate that
their formal diversity is high when compared to theoretical lower and upper
bounds. In addition, to build confidence in the diversity coefficient, we
conduct interpretability experiments and find that the coefficient aligns with
intuitive properties of diversity, e.g., it increases as the number of latent
concepts increases. We conclude the diversity coefficient is reliable, show
it's high for publicly available LLM datasets, and conjecture it can be used to
build useful diverse datasets for LLMs.
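To make the measurement described above concrete, here is a minimal sketch of a Task2Vec-style diversity coefficient: embed each batch of text via the diagonal of the Fisher Information Matrix of a probe language model, then average pairwise cosine distances between batch embeddings. The probe interface (a Hugging Face-style causal LM whose forward pass returns `.loss`) and all function names are assumptions for illustration, not the authors' released implementation.

```python
# Sketch of a Task2Vec-style diversity coefficient (illustrative, not the paper's code).
import itertools
import torch
import torch.nn.functional as F

def task2vec_embedding(probe_model, batch_input_ids):
    """Approximate a Task2Vec embedding as the diagonal of the Fisher Information
    Matrix: squared gradients of the language-modeling loss on one batch of token ids."""
    probe_model.zero_grad()
    out = probe_model(input_ids=batch_input_ids, labels=batch_input_ids)
    out.loss.backward()
    grads = [p.grad.detach().flatten() for p in probe_model.parameters()
             if p.grad is not None]
    return torch.cat(grads).pow(2)  # squared gradients approximate the diagonal FIM

def diversity_coefficient(probe_model, batches):
    """Diversity coefficient: expected cosine distance between Task2Vec
    embeddings of pairs of batches drawn from the dataset."""
    embeddings = [task2vec_embedding(probe_model, b) for b in batches]
    distances = [1.0 - F.cosine_similarity(e1, e2, dim=0).item()
                 for e1, e2 in itertools.combinations(embeddings, 2)]
    return sum(distances) / len(distances)
```

In this sketch, values near 0 mean batches induce nearly identical probe gradients (low formal diversity), while larger values indicate more varied latent structure, matching the intuition tested in the paper's interpretability experiments (the coefficient rises as the number of latent concepts rises).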
Related papers
- On Pretraining Data Diversity for Self-Supervised Learning [57.91495006862553]
We explore the impact of training with more diverse datasets on the performance of self-supervised learning (SSL) under a fixed computational budget.
Our findings consistently demonstrate that increasing pretraining data diversity enhances SSL performance, albeit only when the distribution distance to the downstream data is minimal.
arXiv Detail & Related papers (2024-03-20T17:59:58Z) - Curated LLM: Synergy of LLMs and Data Curation for tabular augmentation in low-data regimes [57.62036621319563]
We introduce CLLM, which leverages the prior knowledge of Large Language Models (LLMs) for data augmentation in the low-data regime.
We demonstrate the superior performance of CLLM in the low-data regime compared to conventional generators.
arXiv Detail & Related papers (2023-12-19T12:34:46Z) - How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition [64.86360698067764]
This study focuses on the interplay of data composition between mathematical reasoning, code generation, and general human-aligning abilities during supervised fine-tuning.
Our experiments reveal that distinct capabilities scale differently and that larger models generally show superior performance with the same amount of data.
Our findings suggest the amount of composition data influences performance more than the composition ratio.
arXiv Detail & Related papers (2023-10-09T07:56:16Z) - From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning [52.257422715393574]
We introduce a self-guided methodology for Large Language Models (LLMs) to autonomously discern and select cherry samples from open-source datasets.
Our key innovation, the Instruction-Following Difficulty (IFD) metric, identifies discrepancies between a model's expected responses and its intrinsic generation capability (an illustrative sketch of such a score follows this list).
arXiv Detail & Related papers (2023-08-23T09:45:29Z) - To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis [50.31589712761807]
Large language models (LLMs) are notoriously token-hungry during pre-training, and high-quality text data on the web is approaching its scaling limit for LLMs.
We investigate the consequences of repeating pre-training data, revealing that the model is susceptible to overfitting.
We then examine the key factors contributing to multi-epoch degradation, finding that dataset size, model parameters, and training objectives all play significant roles.
arXiv Detail & Related papers (2023-05-22T17:02:15Z) - On the Trade-off of Intra-/Inter-class Diversity for Supervised
Pre-training [72.8087629914444]
We study the impact of the trade-off between the intra-class diversity (the number of samples per class) and the inter-class diversity (the number of classes) of a supervised pre-training dataset.
With the size of the pre-training dataset fixed, the best downstream performance comes with a balance on the intra-/inter-class diversity.
arXiv Detail & Related papers (2023-05-20T16:23:50Z)