Training Compute-Optimal Large Language Models
- URL: http://arxiv.org/abs/2203.15556v1
- Date: Tue, 29 Mar 2022 13:38:03 GMT
- Title: Training Compute-Optimal Large Language Models
- Authors: Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya,
Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks,
Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican,
George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen
Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, Laurent Sifre
- Abstract summary: We train language models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens.
For compute-optimal training, the model size and the number of training tokens should be scaled equally.
Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B).
- Score: 54.00424650998489
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the optimal model size and number of tokens for training a
transformer language model under a given compute budget. We find that current
large language models are significantly undertrained, a consequence of the
recent focus on scaling language models whilst keeping the amount of training
data constant. By training over 400 language models ranging from 70
million to over 16 billion parameters on 5 to 500 billion tokens, we find that
for compute-optimal training, the model size and the number of training tokens
should be scaled equally: for every doubling of model size the number of
training tokens should also be doubled. We test this hypothesis by training a
predicted compute-optimal model, Chinchilla, that uses the same compute budget
as Gopher but with 70B parameters and 4$\times$ more data. Chinchilla
uniformly and significantly outperforms Gopher (280B), GPT-3 (175B),
Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of
downstream evaluation tasks. This also means that Chinchilla uses
substantially less compute for fine-tuning and inference, greatly facilitating
downstream usage. As a highlight, Chinchilla reaches a state-of-the-art
average accuracy of 67.5% on the MMLU benchmark, greater than a 7%
improvement over Gopher.
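
To make the equal-scaling rule concrete, here is a minimal back-of-the-envelope sketch. It is not a reproduction of the paper's fitted scaling laws: it assumes the common approximation that training compute is roughly 6·N·D FLOPs for N parameters and D tokens, and it anchors the constants to Chinchilla's reported operating point (70B parameters, about 1.4T tokens), both treated as assumptions here.

```python
# Back-of-the-envelope sketch of compute-optimal allocation ("scale N and D equally").
# Assumptions (not fitted values from the paper): training FLOPs ~= 6 * N * D, and the
# Chinchilla operating point (70e9 parameters, ~1.4e12 tokens) used as the anchor.
import math

ANCHOR_N = 70e9       # parameters (Chinchilla)
ANCHOR_D = 1.4e12     # training tokens (Chinchilla, taken as an assumption here)
ANCHOR_C = 6 * ANCHOR_N * ANCHOR_D   # ~5.9e23 FLOPs

def compute_optimal(budget_flops: float) -> tuple[float, float]:
    """Return (params, tokens) for a FLOP budget, scaling both as sqrt(budget)."""
    scale = math.sqrt(budget_flops / ANCHOR_C)
    return ANCHOR_N * scale, ANCHOR_D * scale

if __name__ == "__main__":
    for c in (1e21, 1e22, 1e23, 1e24):
        n, d = compute_optimal(c)
        print(f"C={c:.0e} FLOPs -> ~{n / 1e9:.1f}B params, ~{d / 1e9:.0f}B tokens")
```

Under this rule a 4x larger compute budget buys a 2x larger model trained on 2x more tokens, which is exactly the "double the model, double the data" prescription of the abstract.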
Related papers
- Patch-Level Training for Large Language Models [69.67438563485887]
This paper introduces patch-level training for Large Language Models (LLMs); a toy sketch of the patch idea appears after this list.
During patch-level training, we feed the language model shorter sequences of patches and train it to predict the next patch.
Following this, the model continues token-level training on the remaining training data to align with the inference mode.
arXiv Detail & Related papers (2024-07-17T15:48:39Z)
- Dense Training, Sparse Inference: Rethinking Training of Mixture-of-Experts Language Models [62.4691912312317]
Mixture-of-Experts (MoE) language models can reduce computational costs by 2-4$\times$ compared to dense models without sacrificing performance.
We propose a hybrid dense training and sparse inference framework for MoE models (DS-MoE), which achieves strong computation and parameter efficiency; a toy layer illustrating the idea appears after this list.
arXiv Detail & Related papers (2024-04-08T14:39:49Z)
- Toward Inference-optimal Mixture-of-Expert Large Language Models [55.96674056805708]
We study the scaling law of MoE-based large language models (LLMs).
We find that MoEs with a few (4/8) experts are the most serving-efficient solution at the same performance, but cost 2.5-3.5x more to train (a back-of-the-envelope cost comparison appears after this list).
We propose to amend the scaling law of MoE by introducing inference efficiency as another metric besides the validation loss.
arXiv Detail & Related papers (2024-04-03T16:33:42Z)
- Language models scale reliably with over-training and on downstream tasks [121.69867718185125]
Scaling laws are useful guides for derisking expensive training runs.
However, there remain gaps between current studies and how language models are trained.
For instance, scaling laws mostly predict loss on next-token prediction, whereas models are usually compared on downstream task performance.
arXiv Detail & Related papers (2024-03-13T13:54:00Z)
- Investigating Pre-trained Language Models on Cross-Domain Datasets, a Step Closer to General AI [0.8889304968879164]
We investigate the ability of pre-trained language models to generalize to different non-language tasks.
The four pre-trained models that we used, T5, BART, BERT, and GPT-2, achieve outstanding results.
arXiv Detail & Related papers (2023-06-21T11:55:17Z)
- Training Trajectories of Language Models Across Scales [99.38721327771208]
Scaling up language models has led to unprecedented performance gains.
How do language models of different sizes learn during pre-training?
Why do larger language models demonstrate more desirable behaviors?
arXiv Detail & Related papers (2022-12-19T19:16:29Z)
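
For the patch-level training entry above, the following is a toy sketch under assumed details: K consecutive token embeddings are mean-pooled into a patch and the training target would be the next patch. The patch size, the pooling function, and the helper names are illustrative, not the authors' implementation.

```python
# Toy illustration of patch-level inputs and targets: K consecutive token embeddings are
# mean-pooled into one patch, and the model would be trained to predict the next patch.
# Patch size and mean-pooling are illustrative assumptions, not the paper's exact recipe.
import torch

def to_patches(token_embeddings: torch.Tensor, patch_size: int = 4) -> torch.Tensor:
    """(batch, seq_len, dim) -> (batch, seq_len // patch_size, dim) via mean-pooling."""
    b, t, d = token_embeddings.shape
    t = (t // patch_size) * patch_size                       # drop any ragged tail
    grouped = token_embeddings[:, :t].reshape(b, t // patch_size, patch_size, d)
    return grouped.mean(dim=2)

def patch_level_batch(token_embeddings: torch.Tensor, patch_size: int = 4):
    """Inputs are all patches except the last; targets are all patches except the first."""
    patches = to_patches(token_embeddings, patch_size)
    return patches[:, :-1], patches[:, 1:]
```

After such a patch-level phase, training would switch back to ordinary next-token prediction on the remaining data, as the summary above describes.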
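
For the DS-MoE entry, here is a toy layer illustrating the dense-training/sparse-inference split (an assumption-laden sketch, not the DS-MoE code): in training mode every expert's output is mixed by the router weights, while in eval mode only the top-k experts per position contribute. For clarity the toy still calls every expert on all tokens; a real implementation would dispatch tokens only to their selected experts.

```python
# Toy MoE layer: dense mixing of all experts during training, top-k routing at inference.
# Illustrative only; expert shapes and the lack of gate renormalization are simplifications.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseTrainSparseInferMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = F.softmax(self.router(x), dim=-1)                  # (..., num_experts)
        if self.training:
            # Dense training: every expert runs and is weighted by its gate.
            outs = torch.stack([expert(x) for expert in self.experts], dim=-1)
            return (outs * gates.unsqueeze(-2)).sum(dim=-1)
        # Sparse inference: only the top-k experts per position contribute.
        top_vals, top_idx = gates.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e_idx, expert in enumerate(self.experts):
                mask = (top_idx[..., slot] == e_idx).unsqueeze(-1)
                out = out + mask * top_vals[..., slot:slot + 1] * expert(x)
        return out
```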
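
For the inference-optimal MoE entry, the training-versus-serving trade-off can be made concrete with a hypothetical total-cost model. This is not the paper's amended scaling law; the multipliers and serving volume below are placeholders that only echo the numbers quoted in the summary.

```python
# Hypothetical total-cost comparison: total = training FLOPs + lifetime inference FLOPs.
# Placeholder numbers: a few-expert MoE assumed ~3x more expensive to train at equal
# quality but ~2x cheaper per served token than a wider MoE. Not the paper's model.
def total_cost(train_flops: float, flops_per_served_token: float, tokens_served: float) -> float:
    return train_flops + flops_per_served_token * tokens_served

wide_moe = total_cost(train_flops=1.0e23, flops_per_served_token=2.0e11, tokens_served=1.0e13)
few_expert_moe = total_cost(train_flops=3.0e23, flops_per_served_token=1.0e11, tokens_served=1.0e13)
print(f"wider MoE:      {wide_moe:.2e} total FLOPs")
print(f"few-expert MoE: {few_expert_moe:.2e} total FLOPs")  # cheaper once serving volume is large
```

At low serving volume the cheaper-to-train configuration wins; past a break-even number of served tokens the serving-efficient one does, which is the kind of trade-off an inference-aware scaling law is meant to capture.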