Is the Number of Trainable Parameters All That Actually Matters?
- URL: http://arxiv.org/abs/2109.11928v1
- Date: Fri, 24 Sep 2021 12:43:58 GMT
- Title: Is the Number of Trainable Parameters All That Actually Matters?
- Authors: Amélie Chatelain, Amine Djeghri, Daniel Hesslow, Julien Launay, and Iacopo Poli
- Abstract summary: We investigate ways to tentatively cheat scaling laws, and train larger models for cheaper.
We find that the scaling relationship between test loss and compute depends only on the actual number of trainable parameters.
- Score: 2.624902795082451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has identified simple empirical scaling laws for language models,
linking compute budget, dataset size, model size, and autoregressive modeling
loss. The validity of these simple power laws across orders of magnitude in
model scale provides compelling evidence that larger models are also more
capable models. However, scaling up models under the constraints of hardware
and infrastructure is no easy feat, and rapidly becomes a hard and expensive
engineering problem. We investigate ways to tentatively cheat scaling laws, and
train larger models for cheaper. We emulate an increase in effective
parameters, using efficient approximations: either by doping the models with
frozen random parameters, or by using fast structured transforms in place of
dense linear layers. We find that the scaling relationship between test loss
and compute depends only on the actual number of trainable parameters; scaling
laws cannot be deceived by spurious parameters.
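The abstract's first trick, "doping the models with frozen random parameters", is concrete enough to sketch. The snippet below is a minimal, hypothetical PyTorch illustration (not the authors' code): a layer whose total parameter count is inflated by random weights that are never updated, while the trainable count stays fixed; the paper's finding is that only the trainable count sets the loss-versus-compute curve. Layer names and sizes are made up.

```python
import torch
import torch.nn as nn


class DopedLinear(nn.Module):
    """A trainable linear layer plus a frozen random branch that only inflates
    the total parameter count (hypothetical illustration, not the paper's code)."""

    def __init__(self, d_in: int, d_out: int, d_frozen: int):
        super().__init__()
        self.trainable = nn.Linear(d_in, d_out)              # learned as usual
        self.frozen_in = nn.Linear(d_in, d_frozen, bias=False)
        self.frozen_out = nn.Linear(d_frozen, d_out, bias=False)
        for p in list(self.frozen_in.parameters()) + list(self.frozen_out.parameters()):
            p.requires_grad_(False)                           # random weights, never updated

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.trainable(x) + self.frozen_out(self.frozen_in(x))


layer = DopedLinear(512, 512, d_frozen=4096)
total = sum(p.numel() for p in layer.parameters())
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"total={total:,} trainable={trainable:,}")  # total is much larger than trainable
```

The structured-transform variant cuts the other way: replacing a dense trainable matrix with a fast transform lowers the trainable count, and by the paper's result the loss-compute scaling follows that lower count.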
Related papers
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws [67.46133952358785]
We release the Gemstones: the most comprehensive open-source scaling law dataset to date.
These models have been trained with different learning rates, schedules, and architectural shapes.
Our checkpoints enable more complex studies of scaling, such as a law that predicts language performance as a function of model width and depth.
arXiv Detail & Related papers (2025-02-07T18:09:38Z)
- Scaling Inference-Efficient Language Models [3.271571137474847]
We show that model architecture affects inference latency, where models of the same size can have up to 3.5x difference in latency.
We modify the Chinchilla scaling laws to co-optimize the model parameter count, the number of training tokens, and the model architecture (the base parametric form is recalled after this entry).
We release the Morph-1B model, which improves inference latency by 1.8x while maintaining accuracy on downstream tasks compared to open-source models.
arXiv Detail & Related papers (2025-01-30T03:16:44Z)
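For context, the parametric loss fitted in the original Chinchilla work (Hoffmann et al., 2022), which an architecture-aware variant like the one above modifies with additional terms for model shape and latency, expresses loss as a function of parameter count N and training tokens D:

```latex
% Chinchilla parametric loss fit (Hoffmann et al., 2022)
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```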
- Warmstarting for Scaling Language Models [47.691182347349894]
Scaling model sizes to scale performance has worked remarkably well for the current large language model paradigm.
High training costs for contemporary scales of data and models result in a lack of thorough understanding of how to tune and arrive at such training setups.
One direction to ameliorate the cost of pretraining large models is to warmstart the large-scale training from smaller models that are cheaper to tune.
arXiv Detail & Related papers (2024-11-11T20:02:29Z)
- A Hitchhiker's Guide to Scaling Law Estimation [56.06982415792523]
Scaling laws predict the loss of a target machine learning model by extrapolating from easier-to-train models with fewer parameters or smaller training sets.
We estimate more than 1000 scaling laws, then derive a set of best practices for estimating scaling laws in new model families (a toy fitting example follows this entry).
arXiv Detail & Related papers (2024-10-15T17:59:10Z)
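As a toy version of the estimation recipe above (invented numbers, not data or code from the paper), one can fit a saturating power law to losses measured on a handful of small models and extrapolate it to a larger target size:

```python
# Fit L(N) = E + A * N**(-alpha) to small-model losses, then extrapolate.
# All data points here are fabricated for illustration only.
import numpy as np
from scipy.optimize import curve_fit


def scaling_law(N, E, A, alpha):
    return E + A * N ** (-alpha)


params = np.array([1e7, 3e7, 1e8, 3e8, 1e9])        # trainable-parameter counts
losses = np.array([4.10, 3.75, 3.44, 3.20, 2.99])   # measured test losses (made up)

(E, A, alpha), _ = curve_fit(scaling_law, params, losses, p0=(2.0, 30.0, 0.2))
print(f"fitted: E={E:.2f}, A={A:.1f}, alpha={alpha:.3f}")
print(f"predicted loss at 10B params: {scaling_law(1e10, E, A, alpha):.2f}")
```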
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows source models to be upscaled into an MoE model without extra data or further training (a simplified low-rank sketch follows this entry).
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
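The construction is sketched below only at the level its name suggests, assuming (as with LoRA-style methods) that each expert is a low-rank factorization of a fine-tuned weight's difference from the pretrained weight; the paper's actual routing and merging details are not reproduced here.

```python
# Hypothetical sketch: build a rank-r "expert" from a fine-tuned weight delta
# via truncated SVD, so that adding the expert costs few extra parameters.
import torch


def low_rank_expert(w_pretrained: torch.Tensor, w_finetuned: torch.Tensor, rank: int):
    delta = w_finetuned - w_pretrained                   # task-specific update
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    A = U[:, :rank] * S[:rank]                           # (d_out, rank)
    B = Vh[:rank, :]                                     # (rank, d_in)
    return A, B                                          # expert(x) = x @ B.T @ A.T


w0 = torch.randn(768, 768)                               # stand-in pretrained weight
w1 = w0 + 0.01 * torch.randn(768, 768)                   # stand-in fine-tuned weight
A, B = low_rank_expert(w0, w1, rank=16)
print(A.shape, B.shape)                                  # (768, 16), (16, 768)
```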
- nanoLM: an Affordable LLM Pre-training Benchmark via Accurate Loss Prediction across Scales [65.01417261415833]
We present an approach to predict the pre-training loss, based on our observations that Maximal Update Parametrization (muP) enables accurate fitting of scaling laws (a rough sketch of the underlying idea follows this entry).
With around 14% of the one-time pre-training cost, we can accurately forecast the loss for models up to 52B.
Our goal with nanoLM is to empower researchers with limited resources to reach meaningful conclusions on large models.
arXiv Detail & Related papers (2023-04-14T00:45:01Z)
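muP has precise parametrization rules (defined in the Tensor Programs papers and the `mup` package); the sketch below only conveys the general flavour that makes cheap extrapolation possible, namely reusing hyperparameters tuned at a small base width by rescaling the Adam learning rate of matrix-like weights with width. Everything here is illustrative and is not nanoLM's implementation.

```python
# Rough, illustrative muP-flavoured setup: hyperparameters tuned at `base_width`
# are reused at larger widths by scaling the LR of matrix-like weights by
# base_width / width. Not the exact muP rules.
import torch
import torch.nn as nn


def width_scaled_param_groups(model: nn.Module, base_lr: float, base_width: int, width: int):
    matrices = [p for p in model.parameters() if p.ndim >= 2]   # weight matrices
    vectors = [p for p in model.parameters() if p.ndim < 2]     # biases, norms
    return [
        {"params": matrices, "lr": base_lr * base_width / width},
        {"params": vectors, "lr": base_lr},
    ]


width = 1024
model = nn.Sequential(nn.Linear(width, width), nn.ReLU(), nn.Linear(width, 1))
optimizer = torch.optim.Adam(width_scaled_param_groups(model, base_lr=3e-3, base_width=256, width=width))
```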
- A Solvable Model of Neural Scaling Laws [72.8349503901712]
Large language models with a huge number of parameters, when trained on a near internet-sized number of tokens, have been empirically shown to obey neural scaling laws.
We propose a statistical model -- a joint generative data model and random feature model -- that captures this neural scaling phenomenology.
A key finding is the manner in which power laws in the statistics of natural datasets are extended by nonlinear random feature maps (a toy random-feature example follows this entry).
arXiv Detail & Related papers (2022-10-30T15:13:18Z)
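The random feature component of such a model is easy to make concrete. The toy example below (synthetic data and arbitrary sizes, not the paper's setup) draws a frozen random nonlinear feature map and trains only a ridge-regularized linear readout on top of it:

```python
# Toy random-feature regression: fixed random ReLU features, trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
n_train, d, n_features = 2000, 64, 512

X = rng.standard_normal((n_train, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n_train)       # noisy linear teacher

W = rng.standard_normal((d, n_features)) / np.sqrt(d)      # frozen random projection
Phi = np.maximum(X @ W, 0.0)                               # nonlinear random features

ridge = 1e-3
theta = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_features), Phi.T @ y)
print("train MSE:", float(np.mean((Phi @ theta - y) ** 2)))
```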
- Scaling Laws Under the Microscope: Predicting Transformer Performance from Small Scale Experiments [42.793379799720434]
We investigate whether scaling laws can be used to accelerate model development.
We find that scaling laws emerge at finetuning time in some NLP tasks.
For tasks where scaling laws exist, they can be used to predict the performance of larger models.
arXiv Detail & Related papers (2022-02-13T19:13:00Z)
- Scaling Laws for Acoustic Models [7.906034575114518]
Recent work has shown that autoregressive generative models with cross-entropy objective functions exhibit smooth power-law relationships.
We show that acoustic models trained with an auto-predictive coding loss behave as if they are subject to similar scaling laws.
arXiv Detail & Related papers (2021-06-11T18:59:24Z)
- Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers [94.43313684188819]
We study the impact of model size in this setting, focusing on Transformer models for NLP tasks that are limited by compute.
We first show that even though smaller Transformer models execute faster per iteration, wider and deeper models converge in significantly fewer steps.
This leads to an apparent trade-off between the training efficiency of large Transformer models and the inference efficiency of small Transformer models (a generic compression sketch follows this entry).
arXiv Detail & Related papers (2020-02-26T21:17:13Z)
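One generic way to realize the "then compress" step, shown here only as a sketch and not as the paper's exact pipeline, is post-training dynamic quantization of the trained model's linear layers:

```python
# Post-training dynamic quantization of Linear layers (generic sketch).
import torch
import torch.nn as nn

big_model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
# ...train the large model to convergence first (omitted)...

quantized = torch.quantization.quantize_dynamic(
    big_model, {nn.Linear}, dtype=torch.qint8        # int8 weights for Linear layers
)
print(quantized)  # Linear modules are replaced with dynamically quantized versions
```

The paper's observation is that the larger model reaches a target loss in fewer gradient steps, and compression such as quantization or pruning then recovers inference efficiency.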
This list is automatically generated from the titles and abstracts of the papers on this site.