Low-rank Momentum Factorization for Memory Efficient Training
- URL: http://arxiv.org/abs/2507.08091v1
- Date: Thu, 10 Jul 2025 18:04:52 GMT
- Title: Low-rank Momentum Factorization for Memory Efficient Training
- Authors: Pouria Mahdavinia, Mehrdad Mahdavi
- Abstract summary: Momentum Factorized SGD (MoFaSGD) maintains a dynamically updated low-rank SVD representation of the first-order momentum. We demonstrate MoFaSGD's effectiveness on large language model benchmarks, achieving a competitive trade-off between memory reduction (comparable to LoRA) and performance.
- Score: 13.464518325870444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fine-tuning large foundation models presents significant memory challenges due to stateful optimizers like AdamW, often requiring several times more GPU memory than inference. While memory-efficient methods like parameter-efficient fine-tuning (e.g., LoRA) and optimizer state compression exist, recent approaches like GaLore bridge these by using low-rank gradient projections and subspace moment accumulation. However, such methods may struggle with fixed subspaces or computationally costly offline resampling (e.g., requiring full-matrix SVDs). We propose Momentum Factorized SGD (MoFaSGD), which maintains a dynamically updated low-rank SVD representation of the first-order momentum, closely approximating its full-rank counterpart throughout training. This factorization enables a memory-efficient fine-tuning method that adaptively updates the optimization subspace at each iteration. Crucially, MoFaSGD leverages the computed low-rank momentum factors to perform efficient spectrally normalized updates, offering an alternative to subspace moment accumulation. We establish theoretical convergence guarantees for MoFaSGD, proving it achieves an optimal rate for non-convex stochastic optimization under standard assumptions. Empirically, we demonstrate MoFaSGD's effectiveness on large language model alignment benchmarks, achieving a competitive trade-off between memory reduction (comparable to LoRA) and performance compared to state-of-the-art low-rank optimization methods. Our implementation is available at https://github.com/pmahdavi/MoFaSGD.
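To make the mechanism in the abstract concrete, below is a minimal NumPy sketch of a MoFaSGD-style step: the first-order momentum is kept as a rank-r SVD factorization, refreshed with each stochastic gradient, and the weight update uses the momentum's singular directions. This is an illustration only, with assumed names (e.g. mofasgd_like_step) and two explicit simplifications: the factors are re-truncated via a full SVD (the paper avoids exactly that cost by updating the factors incrementally), and "spectrally normalized update" is interpreted here as applying U V^T with unit singular values. Consult the linked repository for the actual implementation.

```python
import numpy as np

def mofasgd_like_step(W, grad, U, S, Vt, lr=1e-3, beta=0.9, rank=8):
    """One toy step: blend the new gradient into a rank-`rank` momentum
    factorization, then apply a spectrally normalized update (taken here to
    mean using U V^T, i.e. unit singular values). Illustrative only: the
    paper updates the factors incrementally instead of re-running a full SVD."""
    # Reconstruct the momentum, mix in the fresh stochastic gradient, re-truncate.
    M = beta * (U * S) @ Vt + (1.0 - beta) * grad
    U_new, S_new, Vt_new = np.linalg.svd(M, full_matrices=False)
    U_new, S_new, Vt_new = U_new[:, :rank], S_new[:rank], Vt_new[:rank, :]
    # Update the weights along the momentum's singular directions.
    W_new = W - lr * (U_new @ Vt_new)
    return W_new, U_new, S_new, Vt_new

# Toy usage on a random weight matrix with synthetic "gradients".
rng = np.random.default_rng(0)
m, n, r = 64, 32, 8
W = rng.standard_normal((m, n))
U, S, Vt = np.zeros((m, r)), np.zeros(r), np.zeros((r, n))
for _ in range(5):
    grad = rng.standard_normal((m, n))  # stand-in for a stochastic gradient
    W, U, S, Vt = mofasgd_like_step(W, grad, U, S, Vt, rank=r)
```

Note that only W, U, S, and Vt persist between steps, which is where the memory saving relative to a full-rank momentum buffer comes from.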
Related papers
- A Minimalist Optimizer Design for LLM Pretraining [31.996047271119156]
Training large language models typically relies on adaptive optimizers such as Adam. Recent works such as GaLore, Fira, and APOLLO have proposed state-compressed variants to reduce memory consumption. In this work, we investigate the minimal amount of optimizer state that is truly necessary to retain state-of-the-art performance in LLM pretraining.
arXiv Detail & Related papers (2025-06-20T00:10:35Z) - MLorc: Momentum Low-rank Compression for Large Language Model Adaptation [18.63642841688227]
We propose a memory-efficient training paradigm called Momentum Low-rank Compression (MLorc). By directly compressing and reconstructing momentum rather than gradients, MLorc avoids imposing a fixed-rank constraint on weight update matrices. Empirically, MLorc consistently outperforms other memory-efficient training methods and matches or even exceeds the performance of full fine-tuning.
arXiv Detail & Related papers (2025-06-02T17:21:10Z) - COSMOS: A Hybrid Adaptive Optimizer for Memory-Efficient Training of LLMs [81.01082659623552]
Large Language Models (LLMs) have demonstrated remarkable success across various domains. Their optimization remains a significant challenge due to the complex and high-dimensional loss landscapes they inhabit.
arXiv Detail & Related papers (2025-02-24T18:42:19Z) - A Memory Efficient Randomized Subspace Optimization Method for Training Large Language Models [22.725326215887435]
We introduce a Randomized Subspace Optimization framework for pre-training and fine-tuning Large Language Models. Our approach decomposes the high-dimensional training problem into a series of lower-dimensional subproblems. This structured reduction in dimensionality allows our method to simultaneously reduce memory usage for both activations and optimizer states.
arXiv Detail & Related papers (2025-02-11T03:32:10Z) - I3S: Importance Sampling Subspace Selection for Low-Rank Optimization in LLM Pretraining [50.89661053183944]
Low-rank optimization has emerged as a promising approach to enabling memory-efficient training of large language models (LLMs). Existing low-rank optimization methods typically project gradients onto a low-rank subspace, reducing the memory cost of storing optimizer states (a toy sketch of this projection step appears after this list). We propose importance sampling subspace selection (I3S) for low-rank optimization, which theoretically offers a comparable convergence rate to the dominant subspace approach.
arXiv Detail & Related papers (2025-02-09T06:30:19Z) - Sparse Gradient Compression for Fine-Tuning Large Language Models [58.44973963468691]
Fine-tuning large language models (LLMs) for downstream tasks has become increasingly crucial due to their widespread use and the growing availability of open-source models. High memory costs associated with fine-tuning remain a significant challenge, especially as models increase in size. We propose sparse gradient compression (SGC) to address these limitations.
arXiv Detail & Related papers (2025-02-01T04:18:28Z) - FRUGAL: Memory-Efficient Optimization by Reducing State Overhead for Scalable Training [51.39495282347475]
We introduce FRUGAL (Full-Rank Updates with GrAdient spLitting), a new memory-efficient optimization framework. Our framework can be integrated with various low-rank update selection techniques, including GaLore and BAdam.
arXiv Detail & Related papers (2024-11-12T14:41:07Z) - Zeroth-Order Fine-Tuning of LLMs in Random Subspaces [66.27334633749734]
As language models grow in size, memory demands for backpropagation increase.
Zeroth-order (ZO) optimization methods offer a memory-efficient alternative.
We show that SubZero enhances fine-tuning and achieves faster convergence compared to standard ZO approaches.
arXiv Detail & Related papers (2024-10-11T17:01:43Z) - Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures [21.18741772731095]
Zeroth-order (ZO) algorithms offer a promising alternative by approximating gradients using finite differences of function values (a toy two-point estimate is sketched after this list).
Existing ZO methods struggle to capture the low-rank gradient structure common in LLM fine-tuning, leading to suboptimal performance.
This paper proposes a low-rank ZO algorithm (LOZO) that effectively captures this structure in LLMs.
arXiv Detail & Related papers (2024-10-10T08:10:53Z) - AdaLomo: Low-memory Optimization with Adaptive Learning Rate [59.64965955386855]
We introduce low-memory optimization with adaptive learning rate (AdaLomo) for large language models.
AdaLomo achieves results on par with AdamW, while significantly reducing memory requirements, thereby lowering the hardware barrier to training large language models.
arXiv Detail & Related papers (2023-10-16T09:04:28Z)
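For the GaLore/I3S-style entries above, the following toy NumPy sketch shows the basic low-rank gradient projection idea: the gradient is projected into an r-dimensional subspace, the optimizer state lives only in that smaller space, and the update is mapped back before being applied to the weights. The subspace choice shown (top singular directions) is just one option, and all names and shapes are illustrative assumptions, not either paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 64, 32, 4
grad = rng.standard_normal((m, n))      # stand-in for a full gradient matrix

# Pick an r-dimensional column subspace. GaLore uses the top singular
# directions of the gradient; I3S instead samples directions with
# probabilities tied to their importance (not shown here).
U, _, _ = np.linalg.svd(grad, full_matrices=False)
P = U[:, :r]                            # (m, r) projection basis

low_rank_grad = P.T @ grad              # (r, n): what the optimizer state tracks
update = P @ low_rank_grad              # mapped back to (m, n) before the weight update
```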
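Similarly, for the zeroth-order entries above, this toy sketch illustrates the two-point finite-difference gradient estimate they build on: perturb the parameters along a random direction, measure the loss difference, and scale that direction by the resulting slope. The loss function and step sizes are illustrative assumptions; SubZero and LOZO add subspace/low-rank structure on top of this basic estimator.

```python
import numpy as np

def zo_gradient_estimate(loss_fn, theta, eps=1e-3, rng=None):
    """Two-point zeroth-order estimate: probe the loss along a random
    direction z and scale z by the measured finite-difference slope."""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(theta.shape)
    slope = (loss_fn(theta + eps * z) - loss_fn(theta - eps * z)) / (2.0 * eps)
    return slope * z

# Toy quadratic loss minimized with only function evaluations (no backprop).
theta = np.array([3.0, -2.0])
loss = lambda t: float(np.sum(t ** 2))
for _ in range(200):
    theta -= 0.05 * zo_gradient_estimate(loss, theta)
print(theta)  # drifts toward the minimizer at the origin
```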
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.