Scaling Laws for Autoregressive Generative Modeling
- URL: http://arxiv.org/abs/2010.14701v2
- Date: Fri, 6 Nov 2020 04:16:36 GMT
- Title: Scaling Laws for Autoregressive Generative Modeling
- Authors: Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse,
Jacob Jackson, Heewoo Jun, Tom B. Brown, Prafulla Dhariwal, Scott Gray, Chris
Hallacy, Benjamin Mann, Alec Radford, Aditya Ramesh, Nick Ryder, Daniel M.
Ziegler, John Schulman, Dario Amodei, Sam McCandlish
- Abstract summary: We identify empirical scaling laws for the cross-entropy loss in four domains: generative image modeling, video modeling, multimodal image$\leftrightarrow$text models, and mathematical problem solving.
In all cases autoregressive Transformers smoothly improve in performance as model size and compute budgets increase, following a power-law plus constant scaling law.
- Score: 30.051804305320424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We identify empirical scaling laws for the cross-entropy loss in four
domains: generative image modeling, video modeling, multimodal
image$\leftrightarrow$text models, and mathematical problem solving. In all
cases autoregressive Transformers smoothly improve in performance as model size
and compute budgets increase, following a power-law plus constant scaling law.
The optimal model size also depends on the compute budget through a power-law,
with exponents that are nearly universal across all data domains.
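The "power-law plus constant" form can be written as $L(N) = L_\infty + (N_c/N)^{\alpha_N}$, where $L_\infty$ is the irreducible loss; the names $N_c$ and $\alpha_N$ are notational choices here, not taken from the abstract. A minimal curve-fitting sketch on invented loss values (not the paper's code or data):

```python
# Minimal sketch, not the paper's code: fit the "power-law plus constant" form
# L(N) = L_inf + (N_c / N)**alpha_N to invented (model size, loss) points.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n_params, l_inf, log10_nc, alpha):
    """Irreducible loss plus a power-law reducible term (N_c passed as log10 for stability)."""
    return l_inf + (10.0 ** log10_nc / n_params) ** alpha

n = np.array([1e6, 1e7, 1e8, 1e9, 1e10])         # model sizes (parameters)
loss = np.array([4.10, 3.49, 3.05, 2.74, 2.53])  # hypothetical losses in nats

(l_inf, log10_nc, alpha), _ = curve_fit(scaling_law, n, loss, p0=[2.0, 8.0, 0.1])
print(f"estimated irreducible loss ~ {l_inf:.2f} nats, exponent alpha_N ~ {alpha:.2f}")
```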
The cross-entropy loss has an information theoretic interpretation as
$S($True$) + D_{\mathrm{KL}}($True$||$Model$)$, and the empirical scaling laws
suggest a prediction for both the true data distribution's entropy and the KL
divergence between the true and model distributions. With this interpretation,
billion-parameter Transformers are nearly perfect models of the YFCC100M image
distribution downsampled to an $8\times 8$ resolution, and we can forecast the
model size needed to achieve any given reducible loss (i.e., $D_{\mathrm{KL}}$) in
nats/image for other resolutions.
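Under this interpretation the constant in the fit estimates the true distribution's entropy and the power-law term estimates the KL divergence, so the law can be inverted to forecast the model size that reaches a target reducible loss. A hedged restatement, reusing the assumed $N_c$, $\alpha_N$ notation from the sketch above:

```latex
L(N) \approx \underbrace{S(\mathrm{True})}_{\text{irreducible}\,\approx\,L_\infty}
           + \underbrace{D_{\mathrm{KL}}\!\left(\mathrm{True}\,\|\,\mathrm{Model}_N\right)}_{\text{reducible}\,\approx\,(N_c/N)^{\alpha_N}}
\qquad\Longrightarrow\qquad
N_{\mathrm{target}} \approx N_c\,\left(D_{\mathrm{KL}}^{\mathrm{target}}\right)^{-1/\alpha_N}
```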
We find a number of additional scaling laws in specific domains: (a) we
identify a scaling relation for the mutual information between captions and
images in multimodal models, and show how to answer the question "Is a picture
worth a thousand words?"; (b) in the case of mathematical problem solving, we
identify scaling laws for model performance when extrapolating beyond the
training distribution; (c) we finetune generative image models for ImageNet
classification and find smooth scaling of the classification loss and error
rate, even as the generative loss levels off. Taken together, these results
strengthen the case that scaling laws have important implications for neural
network performance, including on downstream tasks.
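For the caption-image mutual information in (a), one illustrative way to operationalize the "thousand words" comparison (an assumed procedure with placeholder numbers, not the paper's exact method or measurements) is to take the drop in caption loss when the model can see the image and convert it into an equivalent number of words:

```python
# Illustrative sketch with placeholder numbers, not the paper's measurements.

def empirical_mutual_info(loss_text_only: float, loss_text_given_image: float) -> float:
    """Mutual information (nats/caption) estimated as the drop in caption loss
    when the model is allowed to condition on the image."""
    return loss_text_only - loss_text_given_image

def words_worth(mutual_info_nats: float, nats_per_word: float) -> float:
    """Convert mutual information into an equivalent number of text words."""
    return mutual_info_nats / nats_per_word

info = empirical_mutual_info(loss_text_only=120.0, loss_text_given_image=90.0)
print(f"a picture ~ {words_worth(info, nats_per_word=3.0):.0f} words of information")
```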
Related papers
- Scaling Laws in Linear Regression: Compute, Parameters, and Data [86.48154162485712]
We study the theory of scaling laws in an infinite dimensional linear regression setup.
We show that the reducible part of the test error is $\Theta(M^{-(a-1)} + N^{-(a-1)/a})$, where $M$ is the model size and $N$ is the data size.
Our theory is consistent with the empirical neural scaling laws and verified by numerical simulation.
arXiv Detail & Related papers (2024-06-12T17:53:29Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - On the Scalability of Diffusion-based Text-to-Image Generation [97.64837704129005]
We study scaling properties of diffusion based text-to-image (T2I) models.
For model scaling, we find the location and amount of cross attention distinguishes the performance of existing UNet designs.
On the data scaling side, we show the quality and diversity of the training set matters more than simply dataset size.
arXiv Detail & Related papers (2024-04-03T17:34:28Z) - Neural Scaling Laws on Graphs [54.435688297561015]
We study neural scaling laws on graphs from both model and data perspectives.
For model scaling, we investigate the phenomenon of scaling law collapse and identify overfitting as the potential reason.
For data scaling, we suggest that the number of graphs cannot serve as an effective metric of graph data volume in scaling laws, since the sizes of different graphs are highly irregular.
arXiv Detail & Related papers (2024-02-03T06:17:21Z) - A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
arXiv Detail & Related papers (2024-02-02T01:41:38Z) - A Solvable Model of Neural Scaling Laws [72.8349503901712]
Large language models with a huge number of parameters, when trained on a near internet-sized number of tokens, have been empirically shown to obey neural scaling laws.
We propose a statistical model -- a joint generative data model and random feature model -- that captures this neural scaling phenomenology.
Key findings are the manner in which the power laws that occur in the statistics of natural datasets are extended by nonlinear random feature maps.
arXiv Detail & Related papers (2022-10-30T15:13:18Z) - Scaling Laws for Acoustic Models [7.906034575114518]
Recent work has shown that autoregressive generative models with cross-entropy objective functions exhibit smooth power-law relationships relating model quality to model size, training set size, and compute.
We show that acoustic models trained with an auto-predictive coding loss behave as if they are subject to similar scaling laws.
arXiv Detail & Related papers (2021-06-11T18:59:24Z) - Explaining Neural Scaling Laws [17.115592382420626]
The population loss of trained deep neural networks often follows precise power-law scaling relations.
We propose a theory that explains the origins of and connects these scaling laws.
We identify variance-limited and resolution-limited scaling behavior for both dataset and model size.
arXiv Detail & Related papers (2021-02-12T18:57:46Z) - A Neural Scaling Law from the Dimension of the Data Manifold [8.656787568717252]
When data is plentiful, the loss achieved by well-trained neural networks scales as a power-law $L \propto N^{-\alpha}$ in the number of network parameters $N$.
The scaling law can be explained if neural models are effectively just performing regression on a data manifold of intrinsic dimension $d$.
This simple theory predicts scaling exponents $\alpha \approx 4/d$ for cross-entropy and mean-squared error losses (a brief numerical illustration follows this list).
arXiv Detail & Related papers (2020-04-22T19:16:06Z)
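As a small numerical illustration of the $\alpha \approx 4/d$ prediction in the last entry above (the intrinsic dimensions below are illustrative, not measured values):

```python
# Illustrative only: what alpha = 4/d implies for the loss reduction from a
# 10x increase in parameter count, assuming a pure power law L ∝ N^(-alpha).
for d in (8, 16, 32, 64):
    alpha = 4.0 / d
    factor = 10 ** (-alpha)  # multiplicative change in loss for 10x more parameters
    print(f"d={d:2d}  alpha={alpha:.3f}  10x params -> loss x {factor:.2f}")
```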
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.