To Use or not to Use Muon: How Simplicity Bias in Optimizers Matters
- URL: http://arxiv.org/abs/2603.00742v1
- Date: Sat, 28 Feb 2026 17:37:15 GMT
- Title: To Use or not to Use Muon: How Simplicity Bias in Optimizers Matters
- Authors: Sara Dragutinović, Rajesh Ranganath
- Abstract summary: Muon has perhaps gained the highest popularity due to its superior training speed. This paper investigates the potential downsides stemming from the mechanism driving this speedup. Muon might struggle to uncover common underlying structure across tasks, and is more prone to fitting spurious features.
- Score: 16.624341041698013
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For a long time, Adam has served as the ubiquitous default choice for training deep neural networks. Recently, many new optimizers have been introduced, of which Muon has perhaps gained the most popularity due to its superior training speed. While many papers set out to validate the benefits of Muon, our paper investigates the potential downsides stemming from the mechanism driving this speedup. We explore the biases induced when optimizing with Muon, providing a theoretical analysis and tracing its consequences for the learning trajectories and the solutions learned. While the theory does justify the benefits Muon brings, it also guides our intuition in constructing several examples where Muon-optimized models are at a disadvantage. The core problem we emphasize is that Muon optimization removes a simplicity bias that is naturally preserved by older, more thoroughly studied methods such as Stochastic Gradient Descent (SGD). We take first steps toward understanding the consequences this may have: Muon might struggle to uncover common underlying structure across tasks, and may be more prone to fitting spurious features. More broadly, this paper should serve as a reminder: when developing new optimizers, it is essential to consider the biases they introduce, as these biases can fundamentally change a model's behavior -- for better or for worse.
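The paper's argument centers on Muon's orthogonalized update. For context, here is a minimal sketch of the standard Muon step (after Keller Jordan's public reference implementation; the hyperparameters are illustrative defaults, not values from this paper). The Newton-Schulz iteration approximately pushes the momentum matrix's singular values to 1, which is exactly the mechanism the abstract credits for the speedup -- and blames for erasing the spectral simplicity bias that SGD's raw-gradient updates preserve.

```python
import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximate G -> U V^T (all singular values pushed toward 1).

    Quintic Newton-Schulz iteration with the coefficients from the
    public Muon reference implementation; expects a 2D matrix.
    """
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + 1e-7)      # Frobenius norm bounds the spectral norm
    if X.shape[0] > X.shape[1]:
        X = X.T                    # iterate in the wide orientation
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if G.shape[0] > G.shape[1] else X

def muon_step(weight, grad, momentum_buf, lr=0.02, beta=0.95):
    """One Muon update for a 2D hidden-layer weight matrix."""
    momentum_buf.mul_(beta).add_(grad)   # heavy-ball momentum
    weight.add_(newton_schulz_orthogonalize(momentum_buf), alpha=-lr)
```

Because every step has (near-)unit singular values regardless of the gradient's spectrum, weight matrices stop inheriting the low-rank bias of raw gradients; the paper's examples probe when that is harmful.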
Related papers
- MuonRec: Shifting the Optimizer Paradigm Beyond Adam in Scalable Generative Recommendation [60.1890607252082]
MuonRec is the first framework that brings the proposed Muon iteration to RecSys training.<n>We develop an open-source training recipe for recommendation models and evaluate it across both traditional sequential recommenders and modern generative recommenders.
arXiv Detail & Related papers (2026-02-28T02:32:44Z)
- Muon is Provably Faster with Momentum Variance Reduction [55.388203260208485]
Recent empirical research has demonstrated that deep learning optimizers based on the linear minimization oracle (LMO) over specifically chosen non-Euclidean norms outperform Adam-type methods in the training of large language models.
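For orientation (our gloss, not this abstract's): the LMO view says each step solves a linear problem over a norm ball, and for the spectral norm the solution has a closed form via the SVD, which Muon approximates with Newton-Schulz iterations. A minimal sketch:

```python
import torch

def spectral_lmo(grad: torch.Tensor, radius: float = 1.0) -> torch.Tensor:
    """Solve argmin_{||D||_2 <= radius} <grad, D>.

    The dual of the spectral norm is the nuclear norm, so the minimizer
    is -radius * U V^T for grad = U S V^T, with optimal value
    -radius * ||grad||_*. An exact SVD is used here purely for clarity.
    """
    U, _, Vh = torch.linalg.svd(grad, full_matrices=False)
    return -radius * (U @ Vh)
```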
arXiv Detail & Related papers (2025-12-18T14:38:39Z)
- NorMuon: Making Muon more efficient and scalable [71.49702449498085]
We propose NorMuon (Neuron-wise Normalized Muon) as a successor to Adam. We show that NorMuon consistently outperforms both Adam and Muon, achieving 21.74% better training efficiency than Adam and an 11.31% improvement over Muon in a 1.1B-parameter pretraining setting.
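The abstract above does not spell out the normalization, so the following is only a plausible reading of "neuron-wise" (an assumption for illustration, not the paper's definition): after the Muon-style orthogonalization, rescale each output neuron's row of the update to unit RMS so no neuron receives a disproportionately large step.

```python
import torch

def neuronwise_normalize(update: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Rescale each row (one output neuron) of a 2D update to unit RMS.

    Illustrative assumption only: NorMuon defines its own statistics;
    this merely conveys the general 'neuron-wise normalization' idea.
    """
    row_rms = update.pow(2).mean(dim=1, keepdim=True).sqrt()
    return update / (row_rms + eps)
```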
arXiv Detail & Related papers (2025-10-07T01:13:41Z)
- FedMuon: Federated Learning with Bias-corrected LMO-based Optimization [36.00641661700195]
We study how Muon can be utilized in federated learning. We demonstrate that FedMuon can outperform state-of-the-art federated learning methods.
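The abstract gives no algorithmic details, so the sketch below shows only the generic federated structure around Muon (an assumption; the paper's bias correction is not reproduced): each client runs local Muon steps, and the server averages the resulting weights.

```python
import torch

def orthogonalize(G):
    """Exact U V^T via SVD (stand-in for Muon's Newton-Schulz step)."""
    U, _, Vh = torch.linalg.svd(G, full_matrices=False)
    return U @ Vh

def fedmuon_round(server_w, client_grad_fns, local_steps=5, lr=0.02, beta=0.95):
    """One round of a generic FedAvg-with-local-Muon scheme (illustrative)."""
    client_ws = []
    for grad_fn in client_grad_fns:           # one gradient oracle per client
        w, buf = server_w.clone(), torch.zeros_like(server_w)
        for _ in range(local_steps):
            buf.mul_(beta).add_(grad_fn(w))   # local heavy-ball momentum
            w.add_(orthogonalize(buf), alpha=-lr)
        client_ws.append(w)
    return torch.stack(client_ws).mean(0)     # server-side averaging
```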
arXiv Detail & Related papers (2025-09-30T14:45:12Z)
- Muon Outperforms Adam in Tail-End Associative Memory Learning [118.98991042050532]
We show that Muon consistently achieves balanced learning across classes regardless of feature embeddings. Our empirical observations and theoretical analyses reveal Muon's core advantage: its update rule aligns with the outer-product structure of linear associative memories.
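A quick self-contained illustration of that outer-product claim (the toy setup is ours, not the paper's): in a linear associative memory trained with squared loss, the gradient is a sum of value-key outer products weighted by class frequency, so its singular values, and hence SGD's progress per class, inherit the head/tail imbalance; an orthogonalized (Muon-style) step equalizes them.

```python
import torch

torch.manual_seed(0)
d = 32
counts = [100.0, 1.0]                    # head class vs. tail class frequency
K = torch.randn(2, d)
K = K / K.norm(dim=1, keepdim=True)      # keys k_i (near-orthogonal in R^32)
V = torch.eye(2)                         # one-hot values v_i

# Loss 0.5 * sum_i n_i ||v_i - W k_i||^2 at W = 0 has gradient
# G = -sum_i n_i v_i k_i^T: an outer-product sum scaled by class counts.
G = -sum(n * torch.outer(v, k) for n, v, k in zip(counts, V, K))

print(torch.linalg.svdvals(G))           # ~[100, 1]: imbalanced SGD progress
U, _, Vh = torch.linalg.svd(G, full_matrices=False)
print(torch.linalg.svdvals(U @ Vh))      # [1, 1]: a Muon-style step is balanced
```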
arXiv Detail & Related papers (2025-09-30T10:04:08Z)
- On the Convergence Analysis of Muon [19.29806555936508]
We present a comprehensive convergence rate analysis of Muon and its comparison with Gradient Descent (GD). Our theoretical results reveal that Muon can benefit from the low-rank and approximate blockwise diagonal structure of Hessian matrices.
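One generic way to see why low rank helps here (a standard norm inequality, not necessarily the paper's exact bound): spectral-norm methods like Muon are naturally analyzed in the nuclear norm, and for low-rank gradients the nuclear norm stays close to the Frobenius norm that GD's analysis uses.

```latex
% For any matrix G with r = rank(G):
\|G\|_F \;\le\; \|G\|_{*} \;\le\; \sqrt{r}\,\|G\|_F ,
% so a nuclear-norm-based guarantee loses at most a factor \sqrt{r}
% (rather than \sqrt{\min(m,n)}) against a Frobenius-norm guarantee --
% a small factor exactly when gradients/Hessian blocks are low-rank.
```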
arXiv Detail & Related papers (2025-05-29T17:58:01Z)
- Muon is Scalable for LLM Training [50.68746986439438]
We introduce Moonlight, a Mixture-of-Experts (MoE) model trained with 5.7T tokens using Muon. Our model improves the current frontier, achieving better performance with far fewer training FLOPs than prior models. We open-source our distributed Muon implementation, which is memory-optimal and communication-efficient.
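One practical detail reported for this work is how the Muon update is rescaled so Adam-tuned learning rates carry over; the sketch below follows that recipe, but treat the exact 0.2 * sqrt(max(fan_out, fan_in)) factor and hyperparameters as reported values we have not re-derived.

```python
import math
import torch

def scaled_muon_update(weight, ortho_update, lr=1e-3, wd=0.1):
    """Apply an orthogonalized Muon update with Adam-compatible scaling.

    Moonlight pairs Muon with decoupled weight decay and rescales the
    update by ~0.2 * sqrt(max(dims)) to match Adam's typical update RMS
    (constant taken as reported, not derived here).
    """
    scale = 0.2 * math.sqrt(max(weight.shape))
    weight.mul_(1.0 - lr * wd)                  # decoupled weight decay
    weight.add_(ortho_update, alpha=-lr * scale)
```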
arXiv Detail & Related papers (2025-02-24T09:12:29Z)
- Confronting Reward Overoptimization for Diffusion Models: A Perspective of Inductive and Primacy Biases [76.9127853906115]
Bridging the gap between diffusion models and human preferences is crucial for their integration into practical generative workflows.
We propose Temporal Diffusion Policy Optimization with critic active neuron Reset (TDPO-R), a policy gradient algorithm that exploits the temporal inductive bias of diffusion models.
Empirical results demonstrate the superior efficacy of our methods in mitigating reward overoptimization.
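The abstract names "critic active neuron reset" without a recipe, so the sketch below shows a generic dormant-neuron reset from the primacy-bias literature, offered as an assumption about the flavor of the mechanism rather than TDPO-R's actual procedure: hidden units whose normalized activity drops below a threshold get reinitialized, with outgoing weights zeroed so the reset does not perturb the critic's outputs.

```python
import math
import torch
import torch.nn as nn

@torch.no_grad()
def reset_dormant_neurons(layer: nn.Linear, next_layer: nn.Linear,
                          activations: torch.Tensor, tau: float = 0.1):
    """Reinitialize hidden units whose normalized activity is below tau.

    Generic primacy-bias mitigation (assumption, not TDPO-R's exact
    method). `activations` holds this layer's post-activation outputs
    over a batch, shape (batch, hidden).
    """
    score = activations.abs().mean(dim=0)
    score = score / (score.mean() + 1e-8)          # normalized activity
    dormant = score < tau
    if dormant.any():
        fresh = torch.empty_like(layer.weight)
        nn.init.kaiming_uniform_(fresh, a=math.sqrt(5))
        layer.weight[dormant] = fresh[dormant]     # re-init incoming weights
        if layer.bias is not None:
            layer.bias[dormant] = 0.0
        next_layer.weight[:, dormant] = 0.0        # silence outgoing weights
```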
arXiv Detail & Related papers (2024-02-13T15:55:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.