GPU Memory Prediction for Multimodal Model Training
- URL: http://arxiv.org/abs/2512.07853v1
- Date: Wed, 26 Nov 2025 06:24:58 GMT
- Title: GPU Memory Prediction for Multimodal Model Training
- Authors: Jinwoo Jeong, Minchul Kang, Younghun Go, Changyong Shin, Hyunho Lee, Junho Yoon, Gyeongsik Yang, Chuck Yoo,
- Abstract summary: We propose a framework that predicts the peak GPU memory usage by analyzing the model architecture and training behavior of multimodal models. Our framework achieves high prediction accuracy, with an average MAPE of ~8.7%.
- Score: 12.707615972878472
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As deep learning models in agentic AI systems grow in scale and complexity, GPU memory requirements increase and often exceed the available GPU memory capacity, leading to out-of-memory (OoM) errors. An OoM error interrupts the entire training run and wastes substantial computational resources. Therefore, to prevent OoM, accurate prediction of GPU memory usage is essential. However, previous studies focus only on unimodal architectures and fail to generalize to multimodal models, even though multimodal models are a common choice in agentic AI systems. To address this limitation, we propose a framework that predicts the peak GPU memory usage by analyzing the model architecture and training behavior of multimodal models. Specifically, the framework decomposes the multimodal model into its constituent layers and applies factorization to estimate the memory usage of each layer. Our evaluation shows that our framework achieves high prediction accuracy, with an average MAPE of ~8.7%.
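The layer-wise decomposition described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the per-layer cost terms, the fp32 assumption, and the `estimate_peak_memory` helper are illustrative assumptions showing how summing factorized per-layer estimates yields a peak-memory prediction.

```python
# Illustrative sketch: approximate peak training memory as the sum of
# per-layer parameter, gradient, optimizer-state, and activation memory.

BYTES_FP32 = 4  # assume fp32 training throughout

def layer_memory(params: int, activations: int, optimizer_states: int = 2) -> int:
    """Estimate bytes for one layer: weights + gradients + optimizer states + activations."""
    weights = params * BYTES_FP32
    grads = params * BYTES_FP32
    opt = params * BYTES_FP32 * optimizer_states  # e.g. Adam keeps two moments
    acts = activations * BYTES_FP32
    return weights + grads + opt + acts

def estimate_peak_memory(layers: list[tuple[int, int]]) -> int:
    """Sum per-layer estimates for a model decomposed into (params, activations) pairs."""
    return sum(layer_memory(p, a) for p, a in layers)

# A toy two-layer "multimodal" model: a vision block and a text block.
model = [(1_000_000, 250_000), (2_000_000, 500_000)]
print(estimate_peak_memory(model) / 1e6, "MB")
```

In practice a predictor of this kind must also account for framework overheads, allocator fragmentation, and the point in the iteration where usage actually peaks, which is why the paper analyzes training behavior in addition to the static architecture.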
Related papers
- MELINOE: Fine-Tuning Enables Memory-Efficient Inference for Mixture-of-Experts Models [13.907916161242794]
Mixture-of-Experts (MoE) model architectures can significantly reduce the number of activated parameters per token. Their large overall parameter counts and model sizes have precluded their widespread usage in resource-constrained settings. We present MELINOE, a method that fine-tunes an MoE model to more strongly prefer activating a smaller number of experts per sequence.
arXiv Detail & Related papers (2026-01-30T14:40:18Z) - xMem: A CPU-Based Approach for Accurate Estimation of GPU Memory in Deep Learning Training Workloads [2.2991119948183525]
Accurate estimation of how much GPU memory a job will require is fundamental to enabling advanced scheduling and GPU sharing. We propose xMem, a novel framework that leverages CPU-only dynamic analysis to accurately estimate peak GPU memory requirements. The analysis of 5209 runs, which includes ANOVA and Monte Carlo results, highlights xMem's benefits.
arXiv Detail & Related papers (2025-10-23T23:16:27Z) - Beyond Memorization: Extending Reasoning Depth with Recurrence, Memory and Test-Time Compute Scaling [60.63703438729223]
We show how different architectures and training methods affect a model's multi-step reasoning capabilities. We confirm that increasing model depth plays a crucial role for sequential computations.
arXiv Detail & Related papers (2025-08-22T18:57:08Z) - LatentLLM: Attention-Aware Joint Tensor Compression [50.33925662486034]
Large language models (LLMs) and large multi-modal models (LMMs) require a massive amount of computational and memory resources. We propose a new framework to convert such LLMs/LMMs into a reduced-dimension latent structure.
arXiv Detail & Related papers (2025-05-23T22:39:54Z) - Quantifying Memory Utilization with Effective State-Size [73.52115209375343]
We develop a measure of 'memory utilization'. This metric is tailored to the fundamental class of systems with 'input-invariant' and 'input-varying' linear operators.
arXiv Detail & Related papers (2025-04-28T08:12:30Z) - Mind the Memory Gap: Unveiling GPU Bottlenecks in Large-Batch LLM Inference [4.497936996651617]
Large language models have been widely adopted across different tasks, but their auto-regressive nature often leads to inefficient resource utilization during inference. In this paper, through an in-depth GPU-level analysis, we reveal that large-batch inference remains memory-bound, with most GPU compute capabilities underutilized.
arXiv Detail & Related papers (2025-03-11T11:21:35Z) - Causal Estimation of Memorisation Profiles [58.20086589761273]
Understanding memorisation in language models has practical and societal implications.
Memorisation is the causal effect of training with an instance on the model's ability to predict that instance.
This paper proposes a new, principled, and efficient method to estimate memorisation based on the difference-in-differences design from econometrics.
arXiv Detail & Related papers (2024-06-06T17:59:09Z) - What do larger image classifiers memorise? [64.01325988398838]
We show that training examples exhibit an unexpectedly diverse set of memorisation trajectories across model sizes.
We find that knowledge distillation, an effective and popular model compression technique, tends to inhibit memorisation, while also improving generalisation.
arXiv Detail & Related papers (2023-10-09T01:52:07Z) - A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning [56.450090618578]
Class-Incremental Learning (CIL) aims to train a model under a limited memory budget.
We show that when counting the model size into the total budget and comparing methods with aligned memory size, saving models does not consistently work.
We propose a simple yet effective baseline, denoted as MEMO for Memory-efficient Expandable MOdel.
arXiv Detail & Related papers (2022-05-26T08:24:01Z) - M6-10T: A Sharing-Delinking Paradigm for Efficient Multi-Trillion Parameter Pretraining [55.16088793437898]
Training extreme-scale models requires enormous amounts of compute and a large memory footprint.
We propose a simple training strategy called "Pseudo-to-Real" for large models with high memory-footprint requirements.
arXiv Detail & Related papers (2021-10-08T04:24:51Z) - Diagonal Memory Optimisation for Machine Learning on Micro-controllers [21.222568055417717]
Microcontrollers and low-power CPUs are increasingly being used to perform inference with machine learning models.
The small amount of RAM available on these targets limits the size of the models that can be executed.
A diagonal memory optimisation technique is described and shown to achieve memory savings of up to 34.5% when applied to eleven common models.
arXiv Detail & Related papers (2020-10-04T19:45:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.