Beyond the Ideal: Analyzing the Inexact Muon Update
- URL: http://arxiv.org/abs/2510.19933v1
- Date: Wed, 22 Oct 2025 18:01:07 GMT
- Title: Beyond the Ideal: Analyzing the Inexact Muon Update
- Authors: Egor Shulgin, Sultan AlRashed, Francesco Orabona, Peter Richtárik
- Abstract summary: We provide the first analysis of the inexact orthogonalized update at Muon's core. We reveal a fundamental coupling between this inexactness and the optimal step size and momentum.
- Score: 54.70108543057578
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Muon optimizer has rapidly emerged as a powerful, geometry-aware alternative to AdamW, demonstrating strong performance in large-scale training of neural networks. However, a critical theory-practice disconnect exists: Muon's efficiency relies on fast, approximate orthogonalization, yet all prior theoretical work analyzes an idealized, computationally intractable version assuming exact SVD-based updates. This work moves beyond the ideal by providing the first analysis of the inexact orthogonalized update at Muon's core. We develop our analysis within the general framework of Linear Minimization Oracle (LMO)-based optimization, introducing a realistic additive error model to capture the inexactness of practical approximation schemes. Our analysis yields explicit bounds that quantify performance degradation as a function of the LMO inexactness/error. We reveal a fundamental coupling between this inexactness and the optimal step size and momentum: lower oracle precision requires a smaller step size but larger momentum parameter. These findings elevate the approximation procedure (e.g., the number of Newton-Schulz steps) from an implementation detail to a critical parameter that must be co-tuned with the learning schedule. NanoGPT experiments directly confirm the predicted coupling, with optimal learning rates clearly shifting as approximation precision changes.
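For concreteness, the sketch below illustrates the kind of inexact orthogonalized update the abstract describes, assuming the Newton-Schulz-style iteration used in common open-source Muon implementations. The coefficient values, function names, and hyperparameter defaults are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def newton_schulz_orthogonalize(m, steps=5, eps=1e-7):
    """Approximately replace m = U S V^T with its orthogonal factor U V^T using a
    Newton-Schulz-style iteration instead of an explicit SVD. Fewer steps give a
    cheaper but less exact update -- the 'inexact LMO' regime the paper analyzes."""
    # Quintic-iteration coefficients used in popular open-source Muon code (illustrative).
    a, b, c = 3.4445, -4.7750, 2.0315
    transpose = m.shape[0] > m.shape[1]
    x = m.T if transpose else m
    x = x / (np.linalg.norm(x) + eps)        # Frobenius normalization keeps the iteration stable
    for _ in range(steps):
        g = x @ x.T                          # small Gram matrix (rows <= cols after transpose)
        x = a * x + (b * g + c * (g @ g)) @ x
    return x.T if transpose else x

def muon_like_step(w, grad, momentum_buf, lr=0.02, beta=0.95, ns_steps=5):
    """One simplified, inexact Muon-style step: momentum accumulation followed by
    approximate orthogonalization of the update direction. Hyperparameter names
    and defaults are illustrative, not the paper's."""
    momentum_buf = beta * momentum_buf + grad
    direction = newton_schulz_orthogonalize(momentum_buf, steps=ns_steps)
    # Practical Muon variants add Nesterov momentum and a shape-dependent scale;
    # both are omitted to keep the sketch minimal.
    return w - lr * direction, momentum_buf

# Usage example: more ns_steps -> closer to the exact SVD-based (idealized) update.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 128))
g = rng.standard_normal((64, 128))
m = np.zeros_like(w)
w, m = muon_like_step(w, g, m, ns_steps=5)
```

Here `ns_steps` plays the role of the approximation knob the abstract highlights: fewer Newton-Schulz steps mean a less exact LMO, which, per the analysis, should be compensated with a smaller step size and a larger momentum parameter.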
Related papers
- ECO: Quantized Training without Full-Precision Master Weights [58.97082407934466]
The Error-Compensating (ECO) method eliminates master weights by applying updates directly to quantized parameters. We show that ECO converges to a constant-radius neighborhood of the optimum, while naive master-weight removal can incur an error that is inversely proportional to the learning rate.
arXiv Detail & Related papers (2026-01-29T18:35:01Z) - Bilevel Learning via Inexact Stochastic Gradient Descent [5.312803257246881]
Bilevel optimization is a central tool in machine learning for high-dimensional hyperparameter tuning. We advance the theory of inexact bilevel optimization. We prove convergence and establish rates under decaying accuracy and step-size schedules.
arXiv Detail & Related papers (2025-11-10T07:02:52Z) - A Convergence Analysis of Adaptive Optimizers under Floating-point Quantization [32.97211471008323]
We introduce the first theoretical framework for the convergence of adaptive optimizers, including Adam and Muon, under floating-point quantization of gradients, weights, and states. We show that both algorithms retain convergence rates close to their full-precision counterparts provided the mantissa length scales only logarithmically with the number of iterations. Our analysis further reveals that Adam is highly sensitive to weight and second-moment quantization due to its reliance on $\beta_2 \to 1$, while Muon requires weaker error control and is thus potentially more robust. (A minimal sketch of mantissa-length rounding appears after this list.)
arXiv Detail & Related papers (2025-10-24T10:16:23Z) - HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs [45.37278584462772]
We present HALO, a novel quantization-aware training approach for Transformers. Our approach ensures that all large matrix multiplications during the forward and backward passes are executed in lower precision. Applied to LLAMA-family models, HALO achieves near-full-precision-equivalent results during fine-tuning on various tasks.
arXiv Detail & Related papers (2025-01-05T18:41:54Z) - Understanding Optimization in Deep Learning with Central Flows [95.5647720254338]
We develop theory that can describe the dynamics of optimization in a complex regime. Our results suggest that central flows can be a valuable theoretical tool for reasoning about optimization in deep learning.
arXiv Detail & Related papers (2024-10-31T17:58:13Z) - A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning [74.80956524812714]
We tackle the general differentiable meta learning problem that is ubiquitous in modern deep learning.
These problems are often formalized as Bi-Level Optimizations (BLO).
We introduce a novel perspective by turning a given BLO problem into a stochastic optimization problem, where the inner loss function becomes a smooth probability distribution and the outer loss becomes an expected loss over the inner distribution.
arXiv Detail & Related papers (2024-10-14T12:10:06Z) - Jacobian-Enhanced Neural Networks [0.0]
Jacobian-Enhanced Neural Networks (JENN) are densely connected multi-layer perceptrons.
JENN's main benefit is better accuracy with fewer training points compared to standard neural networks.
arXiv Detail & Related papers (2024-06-13T14:04:34Z) - Critical Bach Size Minimizes Stochastic First-Order Oracle Complexity of
Deep Learning Optimizer using Hyperparameters Close to One [0.0]
We show that deep learning optimizers using small constant learning rates, hyperparameters close to one, and large batch sizes can find the model parameters of deep neural networks that minimize the loss functions.
Results indicate that Adam using a small constant learning rate, hyperparameters close to one, and the critical batch size minimizing SFO complexity converges faster than Momentum and gradient descent.
arXiv Detail & Related papers (2022-08-21T06:11:23Z) - Meta Hamiltonian Learning [0.0]
We use a machine learning technique known as meta-learning to learn a more efficient optimizer for this task.
We observe that the meta-optimizer outperforms other optimization methods in average loss over test samples.
arXiv Detail & Related papers (2021-04-09T16:01:34Z) - Fast Rates for Contextual Linear Optimization [52.39202699484225]
We show that a naive plug-in approach achieves regret convergence rates that are significantly faster than methods that directly optimize downstream decision performance.
Our results are overall positive for practice: predictive models are easy and fast to train using existing tools, simple to interpret, and, as we show, lead to decisions that perform very well.
arXiv Detail & Related papers (2020-11-05T18:43:59Z) - Balancing Rates and Variance via Adaptive Batch-Size for Stochastic
Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that an attenuating step-size is required for exact convergence against the fact that a constant step-size learns faster in time, up to an error.
Rather than fixing the minibatch size and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
arXiv Detail & Related papers (2020-07-02T16:02:02Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
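As a companion to the floating-point quantization entry above, the following hedged sketch shows one way to simulate rounding to a given mantissa length. The rounding model and the name `quantize_mantissa` are illustrative assumptions, not the cited paper's scheme.

```python
import numpy as np

def quantize_mantissa(x, mantissa_bits):
    """Round each entry of x to a float with `mantissa_bits` explicit mantissa bits
    (sign and exponent kept exact). This is a simple simulation of the low-precision
    gradients/weights/states studied in the quantization analysis above."""
    x = np.asarray(x, dtype=np.float64)
    mantissa, exponent = np.frexp(x)                 # x = mantissa * 2**exponent, |mantissa| in [0.5, 1)
    scale = 2.0 ** mantissa_bits
    mantissa_q = np.round(mantissa * scale) / scale  # keep only `mantissa_bits` fractional bits
    return np.ldexp(mantissa_q, exponent)

# Example: a coarser mantissa produces a larger relative rounding error.
g = np.random.default_rng(0).standard_normal(1000)
for bits in (4, 8, 12):
    err = np.max(np.abs(quantize_mantissa(g, bits) - g) / np.abs(g))
    print(f"mantissa bits={bits:2d}  max relative error ~= {err:.2e}")
```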
This list is automatically generated from the titles and abstracts of the papers in this site.