Relative Scaling Laws for LLMs
- URL: http://arxiv.org/abs/2510.24626v1
- Date: Tue, 28 Oct 2025 16:55:22 GMT
- Title: Relative Scaling Laws for LLMs
- Authors: William Held, David Hall, Percy Liang, Diyi Yang
- Abstract summary: Scaling laws describe how language models improve with additional data, parameters, and compute. We introduce relative scaling laws, which track how performance gaps between test distributions evolve with scale. These results show that although scaling improves overall performance, it is not a universal equalizer.
- Score: 91.73497548097775
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Scaling laws describe how language models improve with additional data, parameters, and compute. While widely used, they are typically measured on aggregate test sets. Aggregate evaluations yield clean trends but average over heterogeneous subpopulations, obscuring performance disparities. We introduce relative scaling laws, which track how performance gaps between test distributions evolve with scale rather than focusing solely on absolute error. Using 255 decoder-only Transformers trained under matched-compute (IsoFLOP) budgets from $10^{18}$--$10^{20}$ FLOPs on standard pretraining datasets, we find diverse trajectories: academic domains on MMLU converge toward parity; regional English dialects shift depending on population size; and clusters of AI risk behaviours split, with capability- and influence-related risks increasing during pretraining while adversarial risks do not. These results show that although scaling improves overall performance, it is not a universal equalizer. To support further study, we release all model checkpoints from this work to enable practitioners to measure relative alongside traditional scaling laws, in order to better prioritize robustness challenges in light of the bitter lesson.
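As a rough illustration of the idea, the sketch below fits a saturating power law in compute to losses measured on two hypothetical test distributions and tracks how the fitted gap evolves with scale. The functional form, constants, and data are illustrative assumptions, not the paper's released fits or checkpoints.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(c, e, a, alpha):
    """Loss vs. compute c (in units of 1e18 FLOPs): irreducible term + power-law decay."""
    return e + a * np.power(c, -alpha)

def fit_scaling_law(compute, losses):
    """Fit (e, a, alpha); compute is normalized to 1e18 FLOPs for conditioning."""
    p0 = (losses.min() * 0.9, 1.0, 0.1)
    params, _ = curve_fit(power_law, compute, losses, p0=p0, maxfev=20_000)
    return params

# Synthetic IsoFLOP-style measurements for two subpopulations (illustrative only).
compute = np.logspace(0, 2, num=8)             # 1e18 .. 1e20 FLOPs, normalized
rng = np.random.default_rng(0)
loss_a = power_law(compute, 2.0, 1.0, 0.15) + rng.normal(0, 0.01, compute.size)
loss_b = power_law(compute, 2.3, 1.4, 0.10) + rng.normal(0, 0.01, compute.size)

params_a = fit_scaling_law(compute, loss_a)
params_b = fit_scaling_law(compute, loss_b)

# A "relative scaling law": the gap between the two fitted curves as compute grows.
grid = np.logspace(0, 3, num=50)               # extrapolate toward 1e21 FLOPs
gap = power_law(grid, *params_b) - power_law(grid, *params_a)
print(f"fitted gap at 1e18 FLOPs: {gap[0]:.3f}, at 1e21 FLOPs: {gap[-1]:.3f}")
```

Whether the gap shrinks, persists, or grows under extrapolation is exactly the diagnostic the relative-scaling-law framing is after.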
Related papers
- Semantic Tube Prediction: Beating LLM Data Efficiency with JEPA [50.494504099850325]
We introduce the Geodesic Hypothesis, positing that token sequences trace geodesics on a smooth semantic manifold and are therefore locally linear. We show this constraint improves signal-to-noise ratio and preserves diversity by preventing collisions along the trajectory. We demonstrate that geometric priors can surpass brute-force scaling.
arXiv Detail & Related papers (2026-02-26T04:45:07Z)
- Towards Robust Scaling Laws for Optimizers [89.21160945066737]
Empirical scaling laws are widely used to predict loss as model size and training data grow. We show that Chinchilla-style scaling laws emerge naturally as a result of loss decomposition into irreducible, approximation, and optimization errors.
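As a hedged illustration of that decomposition, the snippet below evaluates the standard Chinchilla parametric loss L(N, D) = E + A/N^alpha + B/D^beta and runs a toy grid scan for the compute-optimal allocation under C ≈ 6ND. The constants roughly follow the values reported by Hoffmann et al. (2022); treat them as approximate, not as this paper's fit.

```python
import numpy as np

# Chinchilla-style parametric loss: irreducible + approximation (model-size) +
# data-dependent terms. Constants roughly follow Hoffmann et al. (2022).
E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def chinchilla_loss(n_params, n_tokens):
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Toy compute-optimal scan under C ≈ 6 * N * D for a fixed FLOP budget.
budget = 1e20                                  # FLOPs
n_grid = np.logspace(7, 11, num=400)           # candidate parameter counts
d_grid = budget / (6.0 * n_grid)               # tokens implied by the budget
losses = chinchilla_loss(n_grid, d_grid)
best = np.argmin(losses)
print(f"best N ≈ {n_grid[best]:.3e} params, D ≈ {d_grid[best]:.3e} tokens, "
      f"loss ≈ {losses[best]:.3f}")
```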
arXiv Detail & Related papers (2026-02-07T21:40:33Z)
- Scaling Laws for Robust Comparison of Open Foundation Language-Vision Models and Datasets [5.8465717270452195]
We show how scaling law derivation can be used for model and dataset comparison. For the first time, full scaling laws are derived for two important language-vision learning procedures, CLIP and MaMMUT. We show that comparison can also be performed when deriving scaling laws with a constant learning rate schedule.
arXiv Detail & Related papers (2025-06-05T03:35:59Z)
- Bayesian scaling laws for in-context learning [85.34114399339741]
In-context learning (ICL) is a powerful technique for getting language models to perform complex tasks with no training updates. We show that ICL approximates a Bayesian learner, which gives rise to a novel Bayesian scaling law for ICL. Our scaling law matches existing scaling laws in accuracy while also offering interpretable terms for task priors, learning efficiency, and per-example probabilities.
arXiv Detail & Related papers (2024-10-21T21:45:22Z)
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion: adaptively setting the label smoothing value during training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
- Temporal Scaling Law for Large Language Models [70.74571133406958]
We propose the novel concept of Temporal Scaling Law, studying how the test loss of an LLM evolves as the training steps scale up. In contrast to modeling the test loss as a whole in a coarse-grained manner, we break it down and dive into the fine-grained test loss of each token position. We derive the much more precise temporal scaling law by studying the temporal patterns of the parameters in the dynamic hyperbolic-law.
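The per-position breakdown itself is straightforward to reproduce; the minimal sketch below averages cross-entropy at each token position across a batch, which is the fine-grained quantity such a temporal analysis would track over training steps. Shapes and data here are placeholders, not the paper's setup or its hyperbolic-law fit.

```python
import numpy as np

def per_position_loss(logits, targets):
    """Average cross-entropy at each token position.
    logits: (batch, seq_len, vocab) raw scores; targets: (batch, seq_len) token ids."""
    # Numerically stable log-softmax over the vocabulary axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    nll = -np.take_along_axis(log_probs, targets[..., None], axis=-1).squeeze(-1)
    return nll.mean(axis=0)                    # (seq_len,): loss per position

# Placeholder batch: random logits and targets just to exercise the function.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 16, 100))         # batch=4, seq_len=16, vocab=100
targets = rng.integers(0, 100, size=(4, 16))
print(per_position_loss(logits, targets).round(3))
```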
arXiv Detail & Related papers (2024-04-27T05:49:11Z)
- Scaling Laws for Downstream Task Performance in Machine Translation [27.278023091494507]
We study how the choice of the pretraining data and its size affect downstream performance (translation quality), as judged by metrics such as BLEU and COMET scores. With sufficient alignment, both downstream cross-entropy and translation quality scores improve monotonically with more pretraining data.
arXiv Detail & Related papers (2024-02-06T17:31:20Z)
- Prompt Tuning Pushes Farther, Contrastive Learning Pulls Closer: A Two-Stage Approach to Mitigate Social Biases [13.837927115198308]
We propose an adversarial training-inspired two-stage debiasing model using Contrastive learning and Continuous Prompt Augmentation.
Our approach guides the model to achieve stronger debiasing performance by adding difficulty to the training process.
arXiv Detail & Related papers (2023-07-04T09:35:03Z)