Optimal Scaling Needs Optimal Norm
- URL: http://arxiv.org/abs/2510.03871v1
- Date: Sat, 04 Oct 2025 16:48:36 GMT
- Title: Optimal Scaling Needs Optimal Norm
- Authors: Oleg Filatov, Jiangtao Wang, Jan Ebert, Stefan Kesselheim,
- Abstract summary: Joint optimal scaling across model and dataset sizes is governed by a single invariant. Across models with up to 1.3B parameters trained on up to 138B tokens, the optimal learning rate/batch size pair $(\eta^{\ast}, B^{\ast})$ consistently has the same operator norm value.
- Score: 1.8180584498244492
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite recent progress in optimal hyperparameter transfer under model and dataset scaling, no unifying explanatory principle has been established. Using the Scion optimizer, we discover that joint optimal scaling across model and dataset sizes is governed by a single invariant: the operator norm of the output layer. Across models with up to 1.3B parameters trained on up to 138B tokens, the optimal learning rate/batch size pair $(\eta^{\ast}, B^{\ast})$ consistently has the same operator norm value - a phenomenon we term norm transfer. This constant norm condition is necessary but not sufficient: while for each dataset size, multiple $(\eta, B)$ reach the optimal norm, only a unique $(\eta^{\ast}, B^{\ast})$ achieves the best loss. As a sufficient condition, we provide the first measurement of $(\eta^{\ast}, B^{\ast})$ scaling with dataset size for Scion, and find that the scaling rules are consistent with those of the Adam optimizer. Tuning per-layer-group learning rates also improves model performance, with the output layer being the most sensitive and hidden layers benefiting from lower learning rates. We provide practical insights on norm-guided optimal scaling and release our Distributed Scion (Disco) implementation with logs from over two thousand runs to support research on LLM training dynamics at scale.
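The abstract ties the optimum to a single measurable quantity, the operator norm of the output layer, and notes that per-layer-group learning rates matter, with hidden layers preferring lower rates than the output layer. Below is a minimal PyTorch sketch of how one might log such a norm and assign group-wise learning rates; the norm convention (plain spectral norm), the toy layer grouping, the learning-rate values, and the use of AdamW as a stand-in for the released Distributed Scion (Disco) optimizer are all illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumptions noted above): log the output-layer operator
# norm and set per-layer-group learning rates.
import torch
import torch.nn as nn

def output_operator_norm(weight: torch.Tensor) -> float:
    """Spectral norm (largest singular value) of the output projection.

    The paper's operator norm may use a rescaled variant, e.g. an RMS->RMS
    norm sqrt(fan_in / fan_out) * sigma_max; adjust if needed.
    """
    return torch.linalg.matrix_norm(weight, ord=2).item()

# Hypothetical toy model layout: embedding, hidden block, output head.
model = nn.Sequential(
    nn.Embedding(50_000, 1024),           # input / embedding group
    nn.Linear(1024, 1024),                # hidden group
    nn.Linear(1024, 50_000, bias=False),  # output (unembedding) group
)

# Per-layer-group learning rates: the abstract reports the output layer is
# the most sensitive group and hidden layers benefit from lower rates.
# The values below are placeholders, not tuned numbers from the paper.
param_groups = [
    {"params": model[0].parameters(), "lr": 3e-3},
    {"params": model[1].parameters(), "lr": 1e-3},  # lower hidden-layer lr
    {"params": model[2].parameters(), "lr": 3e-3},
]
optimizer = torch.optim.AdamW(param_groups)  # stand-in for Disco/Scion

# During a sweep over (learning rate, batch size), the norm-transfer claim is
# that the best pair lands at the same output-layer operator norm across
# dataset sizes, so this value is worth logging at each evaluation step.
print(f"output operator norm: {output_operator_norm(model[2].weight):.3f}")
```

In practice the norm would be logged alongside the loss during the $(\eta, B)$ sweep; since the constant-norm condition is necessary but not sufficient, the norm identifies a candidate set of pairs and the loss picks out the unique optimum.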
Related papers
- Extending $μ$P: Spectral Conditions for Feature Learning Across Optimizers [3.5708391029226885]
We propose a novel framework to derive $\mu$P for a broader class of optimizers, including AdamW, AD, LAMB, Sophia, Shampoo and Muon. We implement our $\mu$Ps on multiple benchmark models and demonstrate zero-shot learning rate transfer across increasing model width.
arXiv Detail & Related papers (2026-02-24T14:17:51Z) - Hyperparameter Transfer Enables Consistent Gains of Matrix-Preconditioned Optimizers Across Scales [55.91454326946738]
We study how the optimal learning rate and weight decay should scale with model width and depth for a wide range of optimizers. We find that scaling the learning rate according to $\mu$P improves transfer, but can still suffer from significant finite-width deviations. For compute-optimal scaling, we find that scaling independent weight decay as $1/\mathrm{width}$ is nearly optimal across optimizers. (A rough sketch of these width-scaling rules appears after this list.)
arXiv Detail & Related papers (2025-12-05T11:03:41Z) - Robust Layerwise Scaling Rules by Proper Weight Decay Tuning [50.11170157029911]
In modern scale-invariant architectures, training quickly enters a weight-decay-governed steady state. We introduce a weight-decay scaling rule for AdamW that preserves sublayer gain across widths. Our results extend $\mu$P beyond the near-init regime by explicitly controlling the steady-state scales set by the parameters.
arXiv Detail & Related papers (2025-10-17T02:58:35Z) - Fantastic Pretraining Optimizers and Where to Find Them [59.56075036649332]
AdamW has long been the dominant optimizer in language model pretraining. The speedup of matrix-based optimizers is inversely proportional to model scale.
arXiv Detail & Related papers (2025-09-02T07:43:22Z) - Leveraging Coordinate Momentum in SignSGD and Muon: Memory-Optimized Zero-Order [39.25335214877435]
Fine-tuning Large Language Models (LLMs) is essential for adapting pre-trained models to downstream tasks. Traditional first-order algorithms incur prohibitive memory and computational costs that scale poorly with model size. We propose zero-order (ZO) optimization methods as a memory- and compute-efficient alternative.
arXiv Detail & Related papers (2025-06-04T20:27:17Z) - Predictable Scale: Part I, Step Law -- Optimal Hyperparameter Scaling Law in Large Language Model Pretraining [59.369484219304866]
We conduct an unprecedented empirical investigation, training over 3,700 Large Language Models (LLMs) from scratch across 100 trillion tokens. We establish a universal scaling law for hyperparameter optimization in LLM pretraining, called Step Law. Our estimated optima deviate from the global best performance found via exhaustive search by merely 0.094% on the test set.
arXiv Detail & Related papers (2025-03-06T18:58:29Z) - Lean and Mean Adaptive Optimization via Subset-Norm and Subspace-Momentum with Convergence Guarantees [5.399838579600896]
We introduce two complementary techniques for efficient optimization that reduce memory requirements while accelerating the training of large-scale neural networks. The first technique, Subset-Norm step size, generalizes AdaGrad-Norm and AdaGrad through step-size sharing. The second technique, Subspace-Momentum, reduces the momentum state's memory footprint by restricting momentum to a low-dimensional subspace.
arXiv Detail & Related papers (2024-11-11T16:48:07Z) - Time Transfer: On Optimal Learning Rate and Batch Size In The Infinite Data Limit [1.8337746049048673]
We show an intricate dependence of optimal $\eta$ scaling on the pretraining token budget $T$, $B$ and its relation to the critical batch size $B_{\mathrm{crit}}$. Surprisingly, our results demonstrate that the observed optimal $\eta$ and $B$ dynamics are preserved with $\mu$P model scaling.
arXiv Detail & Related papers (2024-10-08T09:06:34Z) - Transfer Q Star: Principled Decoding for LLM Alignment [105.89114186982972]
Transfer $Q*$ estimates the optimal value function for a target reward $r$ through a baseline model.
Our approach significantly reduces the sub-optimality gap observed in prior SoTA methods.
arXiv Detail & Related papers (2024-05-30T21:36:12Z) - Scaling Sparse Fine-Tuning to Large Language Models [67.59697720719672]
Large Language Models (LLMs) are difficult to fully fine-tune due to their sheer number of parameters.
We propose SpIEL, a novel sparse finetuning method which maintains an array of parameter indices and the deltas of these parameters relative to their pretrained values.
We show that SpIEL is superior to popular parameter-efficient fine-tuning methods like LoRA in terms of performance and comparable in terms of run time.
arXiv Detail & Related papers (2024-01-29T18:43:49Z) - Human-in-the-loop: Provably Efficient Preference-based Reinforcement Learning with General Function Approximation [107.54516740713969]
We study human-in-the-loop reinforcement learning (RL) with trajectory preferences.
Instead of receiving a numeric reward at each step, the agent only receives preferences over trajectory pairs from a human overseer.
We propose the first optimistic model-based algorithm for PbRL with general function approximation.
arXiv Detail & Related papers (2022-05-23T09:03:24Z)
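The "Hyperparameter Transfer" entry above summarizes two width-scaling rules: $\mu$P-style learning-rate scaling and weight decay scaled as $1/\mathrm{width}$. The snippet below is a rough sketch of what those rules look like in code, assuming the standard $\mu$P hidden-layer prescription $\eta \propto 1/\mathrm{width}$ after tuning at a small proxy width; the base width and base values are placeholders, not numbers from any of the listed papers.

```python
# Rough sketch of the width-scaling rules from the "Hyperparameter Transfer"
# entry above; base values and proxy width are illustrative placeholders.

def scaled_hyperparams(width: int, base_width: int = 256,
                       base_lr: float = 1e-2, base_wd: float = 0.1):
    """Return (learning rate, weight decay) for a hidden layer of `width`.

    Assumes the standard muP hidden-layer rule lr ~ 1/width after tuning at
    a small proxy width, and the reported wd ~ 1/width compute-optimal rule.
    """
    lr = base_lr * base_width / width
    wd = base_wd * base_width / width
    return lr, wd

for width in (256, 1024, 4096):
    lr, wd = scaled_hyperparams(width)
    print(f"width={width:5d}  lr={lr:.2e}  wd={wd:.2e}")
```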
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.