Data Similarity is Not Enough to Explain Language Model Performance
- URL: http://arxiv.org/abs/2311.09006v1
- Date: Wed, 15 Nov 2023 14:48:08 GMT
- Title: Data Similarity is Not Enough to Explain Language Model Performance
- Authors: Gregory Yauney and Emily Reif and David Mimno
- Abstract summary: Similarity correlates with performance for multilingual datasets, but in other
benchmarks, similarity metrics are not correlated with accuracy or even with each other.
This suggests that the relationship between pretraining data and downstream tasks is more complex than often assumed.
- Score: 6.364065652816667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models achieve high performance on many but not all downstream
tasks. The interaction between pretraining data and task data is commonly
assumed to determine this variance: a task with data that is more similar to a
model's pretraining data is assumed to be easier for that model. We test
whether distributional and example-specific similarity measures (embedding-,
token- and model-based) correlate with language model performance through a
large-scale comparison of the Pile and C4 pretraining datasets with downstream
benchmarks. Similarity correlates with performance for multilingual datasets,
but in other benchmarks, we surprisingly find that similarity metrics are not
correlated with accuracy or even each other. This suggests that the
relationship between pretraining data and downstream tasks is more complex than
often assumed.
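As an illustration of the kind of comparison the paper describes, the sketch below computes an embedding-based similarity between a pretraining sample and several downstream benchmarks and checks whether that similarity tracks accuracy. It is a minimal, hypothetical setup: `embed_texts`, the benchmark names, and the accuracy values are placeholders, not the paper's data or code.

```python
# A minimal sketch (not the paper's code): compute an embedding-based similarity
# between a pretraining sample and each downstream benchmark, then check whether
# that similarity correlates with the model's accuracy on the benchmark.

import numpy as np
from scipy.stats import spearmanr

def embed_texts(texts):
    """Placeholder embedder: replace with a real sentence-embedding model."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 384))  # e.g., 384-dim embeddings

def mean_cosine_similarity(pretrain_texts, task_texts):
    """Cosine similarity between the two sets' mean embeddings."""
    a = embed_texts(pretrain_texts).mean(axis=0)
    b = embed_texts(task_texts).mean(axis=0)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical inputs: a pretraining sample and benchmarks with known accuracies.
pretrain_sample = ["some pretraining document ...", "another document ..."]
benchmarks = {
    "task_a": (["task A example ..."], 0.71),
    "task_b": (["task B example ..."], 0.55),
    "task_c": (["task C example ..."], 0.63),
}

similarities = [mean_cosine_similarity(pretrain_sample, texts)
                for texts, _ in benchmarks.values()]
accuracies = [acc for _, acc in benchmarks.values()]

# The paper's surprising finding is that this correlation is often weak or absent
# outside multilingual benchmarks.
rho, p = spearmanr(similarities, accuracies)
print(f"Spearman rho={rho:.2f}, p={p:.2f}")
```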
Related papers
- Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals [91.59906995214209]
We propose a new evaluation method, the Counterfactual Attentiveness Test (CAT).
CAT uses counterfactuals by replacing part of the input with its counterpart from a different example, expecting an attentive model to change its prediction.
We show that GPT-3 becomes less attentive with an increased number of demonstrations, while its accuracy on the test data improves.
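A minimal sketch of the counterfactual idea described above, assuming a simple two-part (premise, hypothesis) input format; `predict` and the toy examples are placeholders, not the paper's models or data.

```python
# Swap part of one example's input with the corresponding part of another example
# and count how often the model's prediction changes. An attentive model should
# change its prediction often; an inattentive one barely reacts.

def counterfactual_attentiveness(examples, predict):
    """examples: list of (premise, hypothesis) pairs; predict: callable -> label."""
    changed = 0
    for i, (premise, hypothesis) in enumerate(examples):
        # Take the premise from a *different* example as the counterfactual part.
        other_premise = examples[(i + 1) % len(examples)][0]
        original = predict(premise, hypothesis)
        counterfactual = predict(other_premise, hypothesis)
        changed += int(original != counterfactual)
    return changed / len(examples)

# Toy model that ignores the premise entirely: attentiveness score is 0.0.
examples = [("A dog runs.", "An animal moves."), ("It rains.", "The ground is wet.")]
print(counterfactual_attentiveness(examples, predict=lambda p, h: "entailment"))
```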
arXiv Detail & Related papers (2023-11-16T06:27:35Z) - Stubborn Lexical Bias in Data and Models [50.79738900885665]
We use a new statistical method to examine whether spurious patterns in data appear in models trained on the data.
We apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations.
Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models.
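The snippet below is a simplified stand-in for the reweighting idea described above, not the paper's optimization method: it reweights examples so that a single lexical feature stops predicting the label. The token choice and toy examples are illustrative only.

```python
# Down-weight examples whose (feature, label) combination is over-represented,
# so that after reweighting P(label | feature) roughly matches P(label).

from collections import Counter

def debias_weights(examples, token="not"):
    """examples: list of (text, label). Returns one weight per example."""
    feature = [int(token in text.split()) for text, _ in examples]
    labels = [label for _, label in examples]

    joint = Counter(zip(feature, labels))   # counts of (feature, label) pairs
    feat_counts = Counter(feature)          # counts of the feature alone
    label_freq = Counter(labels)            # counts of the label alone
    n = len(examples)

    weights = []
    for f, y in zip(feature, labels):
        p_y_given_f = joint[(f, y)] / feat_counts[f]
        p_y = label_freq[y] / n
        weights.append(p_y / p_y_given_f)
    return weights

examples = [("it is not good", 0), ("it is not bad", 0),
            ("it is great", 1), ("it is not terrible", 1)]
print(debias_weights(examples))
```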
arXiv Detail & Related papers (2023-06-03T20:12:27Z) - CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models [77.45934004406283]
We systematically study decompounding, the task of splitting compound words into their constituents.
We introduce a dataset of 255k compound and non-compound words across 56 diverse languages obtained from Wiktionary.
We introduce a novel methodology to train dedicated models for decompounding.
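To make the task concrete, here is a toy vocabulary-lookup baseline for decompounding, not the CompoundPiece method itself; the word list is invented for illustration.

```python
# Split a compound word at a position where both halves appear in a known-word list.

def decompound(word, vocabulary):
    """Return the constituents if a single split into two known words exists."""
    for i in range(2, len(word) - 1):
        left, right = word[:i], word[i:]
        if left in vocabulary and right in vocabulary:
            return [left, right]
    return [word]  # no split found: treat as a non-compound word

vocabulary = {"bath", "room", "note", "book", "sun"}
print(decompound("bathroom", vocabulary))  # ['bath', 'room']
print(decompound("sunshine", vocabulary))  # ['sunshine'] (constituent not in vocabulary)
```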
arXiv Detail & Related papers (2023-05-23T16:32:27Z) - Uncertainty Estimation for Language Reward Models [5.33024001730262]
Language models can learn a range of capabilities from unsupervised training on text corpora.
It is often easier for humans to choose between options than to provide labeled data, and prior work has achieved state-of-the-art performance by training a reward model from such preference comparisons.
We seek to address these problems via uncertainty estimation, which can improve sample efficiency and robustness using active learning and risk-averse reinforcement learning.
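A minimal sketch of ensemble-based uncertainty estimation for a reward model, one common way to realize the idea above; the toy "reward models" and the risk penalty are assumptions, not the paper's implementation.

```python
# Use an ensemble of reward models: the mean is the reward estimate, the spread is
# the uncertainty, and a risk-averse objective penalizes disagreement.

import statistics

def ensemble_reward(text, reward_models, risk_penalty=1.0):
    """reward_models: list of callables text -> float (independently trained)."""
    scores = [model(text) for model in reward_models]
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores) if len(scores) > 1 else 0.0
    return {
        "reward": mean,
        "uncertainty": spread,
        # Risk-averse estimate for RL: prefer outputs the ensemble agrees on.
        "risk_averse_reward": mean - risk_penalty * spread,
    }

# Toy ensemble standing in for independently trained reward models.
models = [lambda t: len(t) * 0.10, lambda t: len(t) * 0.12, lambda t: len(t) * 0.08]
print(ensemble_reward("a candidate model response", models))
```

High-uncertainty comparisons are the natural candidates to send to human labelers in an active-learning loop.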
arXiv Detail & Related papers (2022-03-14T20:13:21Z) - Impact of Pretraining Term Frequencies on Few-Shot Reasoning [51.990349528930125]
We investigate how well pretrained language models reason with terms that are less frequent in the pretraining data.
We measure the strength of this correlation for a number of GPT-based language models on various numerical deduction tasks.
Although LMs exhibit strong performance at few-shot numerical reasoning tasks, our results raise the question of how much models actually generalize beyond pretraining data.
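A minimal sketch of the kind of frequency/accuracy correlation described above; the pretraining token sample, the evaluation records, and the choice of Spearman correlation are placeholders rather than the paper's pipeline.

```python
# Count how often each question's key term appears in a pretraining corpus sample
# and check whether those counts correlate with per-question correctness.

from collections import Counter
from scipy.stats import spearmanr

pretraining_tokens = "24 hours 24 7 service open 24 7 the year 1987 ...".split()
term_counts = Counter(pretraining_tokens)

# Hypothetical evaluation records: (key term in the question, model was correct).
eval_records = [("24", True), ("7", True), ("1987", False), ("3661", False)]

frequencies = [term_counts[term] for term, _ in eval_records]
accuracies = [int(correct) for _, correct in eval_records]

# The paper reports that accuracy tends to be higher for terms that are more
# frequent in pretraining data.
rho, _ = spearmanr(frequencies, accuracies)
print(f"frequency/accuracy Spearman rho={rho:.2f}")
```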
arXiv Detail & Related papers (2022-02-15T05:43:54Z) - How much pretraining data do language models need to learn syntax? [12.668478784932878]
Transformer-based pretrained language models achieve outstanding results on many well-known NLU benchmarks.
We study the impact of pretraining data size on the knowledge of the models using RoBERTa.
arXiv Detail & Related papers (2021-09-07T15:51:39Z) - Few-shot learning through contextual data augmentation [74.20290390065475]
Machine translation models need to adapt to new data to maintain their performance over time.
We show that adaptation on the scale of one to five examples is possible.
Our model achieves better accuracy scores than a reference system trained on an average of 313 parallel examples.
arXiv Detail & Related papers (2021-03-31T09:05:43Z) - Improving Commonsense Causal Reasoning by Adversarial Training and Data Augmentation [14.92157586545743]
This paper presents a number of techniques for making models more robust in the domain of causal reasoning.
We show a statistically significant improvement in performance on both datasets, even with only a small number of additionally generated data points.
arXiv Detail & Related papers (2021-01-13T09:55:29Z) - Comparison of Interactive Knowledge Base Spelling Correction Models for Low-Resource Languages [81.90356787324481]
Spelling normalization for low-resource languages is a challenging task because the patterns are hard to predict.
This work compares a neural model and character language models trained with varying amounts of target language data.
Our usage scenario is interactive correction starting from nearly zero training examples, with models improving as more data is collected.
arXiv Detail & Related papers (2020-10-20T17:31:07Z) - SimEx: Express Prediction of Inter-dataset Similarity by a Fleet of Autoencoders [13.55607978839719]
Knowing the similarity between sets of data has a number of positive implications for training an effective model.
We present SimEx, a new method for early prediction of inter-dataset similarity using a set of pretrained autoencoders.
Our method achieves more than 10x speed-up in predicting inter-dataset similarity compared to common similarity-estimating practices.
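The sketch below illustrates the reconstruction-error idea behind inter-dataset similarity prediction, with a linear PCA model standing in for the pretrained autoencoders; it is not the SimEx implementation, and all data is synthetic.

```python
# Fit a model on a reference dataset, then treat the reconstruction error it
# produces on another dataset as an inverse similarity signal.

import numpy as np

def fit_linear_autoencoder(data, n_components=2):
    """Return (mean, components) learned from `data` via PCA."""
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(data, mean, components):
    """Mean squared error of encoding then decoding `data`."""
    centered = data - mean
    reconstructed = centered @ components.T @ components + mean
    return float(np.mean((data - reconstructed) ** 2))

rng = np.random.default_rng(0)
reference = rng.normal(size=(200, 8))                     # "pretraining" dataset
similar = reference + 0.1 * rng.normal(size=(200, 8))     # near-duplicate dataset
dissimilar = rng.normal(loc=5.0, scale=3.0, size=(200, 8))

mean, components = fit_linear_autoencoder(reference)
# Lower error under the reference model implies higher inter-dataset similarity.
print(reconstruction_error(similar, mean, components))
print(reconstruction_error(dissimilar, mean, components))
```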
arXiv Detail & Related papers (2020-01-14T16:52:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.