SLA-MORL: SLA-Aware Multi-Objective Reinforcement Learning for HPC Resource Optimization
- URL: http://arxiv.org/abs/2508.03509v1
- Date: Tue, 05 Aug 2025 14:37:24 GMT
- Title: SLA-MORL: SLA-Aware Multi-Objective Reinforcement Learning for HPC Resource Optimization
- Authors: Seraj Al Mahmud Mostafa, Aravind Mohan, Jianwu Wang
- Abstract summary: We present SLA-MORL, an adaptive multi-objective reinforcement learning framework that intelligently allocates resources based on user-defined preferences. We show that SLA-MORL achieves 67.2% reduction in training time for deadline-critical jobs, 68.8% reduction in costs for budget-constrained workloads, and 73.4% improvement in overall SLA compliance compared to static baselines.
- Score: 0.9026828778470117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic resource allocation for machine learning workloads in cloud environments remains challenging due to competing objectives of minimizing training time and operational costs while meeting Service Level Agreement (SLA) constraints. Traditional approaches employ static resource allocation or single-objective optimization, leading to either SLA violations or resource waste. We present SLA-MORL, an adaptive multi-objective reinforcement learning framework that intelligently allocates GPU and CPU resources based on user-defined preferences (time, cost, or balanced) while ensuring SLA compliance. Our approach introduces two key innovations: (1) intelligent initialization through historical learning or efficient baseline runs that eliminates cold-start problems, reducing initial exploration overhead by 60%, and (2) dynamic weight adaptation that automatically adjusts optimization priorities based on real-time SLA violation severity, creating a self-correcting system. SLA-MORL constructs a 21-dimensional state representation capturing resource utilization, training progress, and SLA compliance, enabling an actor-critic network to make informed allocation decisions across 9 possible actions. Extensive evaluation on 13 diverse ML workloads using production HPC infrastructure demonstrates that SLA-MORL achieves 67.2% reduction in training time for deadline-critical jobs, 68.8% reduction in costs for budget-constrained workloads, and 73.4% improvement in overall SLA compliance compared to static baselines. By addressing both cold-start inefficiency and dynamic adaptation challenges, SLA-MORL provides a practical solution for cloud resource management that balances performance, cost, and reliability in modern ML training environments.
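The dynamic weight adaptation described in the abstract can be pictured as a small feedback rule: the scalarization weights of the multi-objective reward drift toward whichever SLA dimension is currently being violated, in proportion to violation severity. Below is a minimal Python sketch of that idea; the severity formula, learning rate, and function names are illustrative assumptions, not the paper's actual implementation.

```python
def sla_severity(observed: float, target: float) -> float:
    """Relative SLA overshoot; 0.0 when the target is met."""
    return max(0.0, (observed - target) / target)

def adapt_weights(w_time: float, w_cost: float,
                  time_violation: float, cost_violation: float,
                  lr: float = 0.5) -> tuple[float, float]:
    """Shift the reward-scalarization weights toward the violated objective.

    One plausible self-correcting rule, not SLA-MORL's exact update: each
    weight grows with its objective's violation severity, then the pair is
    renormalized onto the simplex.
    """
    w_time += lr * time_violation
    w_cost += lr * cost_violation
    total = w_time + w_cost
    return w_time / total, w_cost / total

# A "balanced" job that is drifting past its deadline but staying under budget:
w_t, w_c = 0.5, 0.5
w_t, w_c = adapt_weights(
    w_t, w_c,
    time_violation=sla_severity(observed=130.0, target=100.0),  # 30% late
    cost_violation=sla_severity(observed=40.0, target=50.0))    # within budget
print(f"time weight {w_t:.2f}, cost weight {w_c:.2f}")          # 0.57 / 0.43
```

The adapted weights would then scalarize the per-step reward (e.g., r = -(w_time * step_time + w_cost * step_cost)) that the actor-critic network maximizes when choosing among its 9 allocation actions over the 21-dimensional state.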
Related papers
- Adaptive Request Scheduling for CodeLLM Serving with SLA Guarantees [6.110847503516972]
Code Large Language Models (CodeLLMs) are increasingly integrated into modern software development. Yet, meeting SLA guarantees remains a significant challenge in resource-constrained, self-hosted serving environments. We propose SABER, a dynamic scheduling strategy that predicts per-request SLA feasibility and makes admission decisions in real time. Our results demonstrate that SLA-aware, adaptive scheduling is key to robust, high-performance CodeLLM serving.
arXiv Detail & Related papers (2025-06-24T14:44:33Z)
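To make the admission idea behind SABER concrete: estimate whether a request can still finish within its deadline given current load, and only admit it if so. A minimal sketch follows, assuming a crude linear latency model; all field names and constants are illustrative, not SABER's actual design.

```python
import time
from dataclasses import dataclass

@dataclass
class Request:
    req_id: str
    prompt_tokens: int
    max_new_tokens: int
    deadline_s: float  # absolute SLA deadline (epoch seconds)

def predicted_latency_s(req: Request, queue_depth: int,
                        s_per_prompt_token: float = 0.001,
                        s_per_new_token: float = 0.02,
                        s_per_queued_req: float = 0.5) -> float:
    """Crude linear model: prefill + decode time plus queueing delay."""
    return (req.prompt_tokens * s_per_prompt_token
            + req.max_new_tokens * s_per_new_token
            + queue_depth * s_per_queued_req)

def admit(req: Request, queue_depth: int) -> bool:
    """Admit only if the request is predicted to meet its SLA deadline."""
    return time.time() + predicted_latency_s(req, queue_depth) <= req.deadline_s

req = Request("r1", prompt_tokens=256, max_new_tokens=512,
              deadline_s=time.time() + 15.0)
print("admit" if admit(req, queue_depth=4) else "reject early")
```

Rejecting infeasible requests early, rather than letting them time out in the queue, is what preserves SLA compliance for the requests that remain.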
- CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation [17.807249890437767]
We propose CoLA and its memory-efficient implementation, CoLA-M, to replace full-size layers with compute-efficient auto-encoders. Experiments on LLaMA models with 60 million to 7 billion parameters show that CoLA reduces the computing cost by 2x. CoLA-M further squeezes memory cost without sacrificing throughput, offering a pre-training approach with collectively superior parameter, computing, and memory efficiency.
arXiv Detail & Related papers (2025-02-16T01:05:16Z)
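CoLA's central move, replacing a full-width projection with a narrow auto-encoder (down-project to rank r, apply the activation, project back up), can be sketched in a few lines of PyTorch. The dimensions, rank, and activation choice below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LowRankActivationLayer(nn.Module):
    """Replaces W in R^{d x d} with down (d->r), activation, up (r->d), r << d."""
    def __init__(self, d: int, r: int):
        super().__init__()
        self.down = nn.Linear(d, r, bias=False)  # encoder
        self.act = nn.GELU()                     # bottleneck activation
        self.up = nn.Linear(r, d, bias=False)    # decoder

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.act(self.down(x)))

d, r = 4096, 256
layer = LowRankActivationLayer(d, r)
y = layer(torch.randn(2, 8, d))  # (batch, seq, d) -> same shape
print(y.shape, f"params: {2*d*r}/{d*d} = {2*d*r/(d*d):.2%}")
```

With d = 4096 and r = 256, the replaced layer holds 2dr = 2.1M parameters instead of d^2 = 16.8M, which is where the compute savings come from.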
- Adaptive Resource Allocation Optimization Using Large Language Models in Dynamic Wireless Environments [25.866960634041092]
Current solutions rely on domain-specific architectures or techniques, and a general DL approach for constrained optimization remains undeveloped. We propose a large language model for resource allocation (LLM-RAO) to address the complex resource allocation problem while adhering to constraints. LLM-RAO achieves up to a 40% performance enhancement compared to conventional DL methods and up to an 80% improvement over analytical approaches.
arXiv Detail & Related papers (2025-02-04T12:56:59Z)
- SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training [60.9776082805359]
Large Language Models (LLMs) have demonstrated exceptional performance across diverse tasks, yet their training remains highly resource-intensive and susceptible to training instability. This paper presents a comprehensive investigation into gradient spikes observed during LLM training, revealing their prevalence across multiple architectures and datasets. We propose Spike-Aware Adam with Momentum Reset (SPAM), a novel optimizer designed to counteract gradient spikes through momentum reset and spike-aware clipping.
arXiv Detail & Related papers (2025-01-12T15:21:22Z)
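Momentum reset and spike-aware clipping translate naturally into a wrapper around an Adam step: detect a gradient whose norm spikes relative to a running average, rescale it, and zero the optimizer's first-moment buffers so the spike does not linger. The detection rule and threshold here are illustrative assumptions, not SPAM's exact algorithm.

```python
import torch

def spike_aware_step(model: torch.nn.Module, optimizer: torch.optim.Optimizer,
                     ema_norm: float, beta: float = 0.99,
                     thresh: float = 5.0) -> float:
    """One optimizer step with spike-aware clipping and momentum reset (sketch)."""
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    norm = torch.norm(torch.stack([g.norm() for g in grads])).item()
    if ema_norm > 0.0 and norm > thresh * ema_norm:
        scale = thresh * ema_norm / norm
        for g in grads:
            g.mul_(scale)                      # spike-aware clipping
        for state in optimizer.state.values():
            if "exp_avg" in state:
                state["exp_avg"].zero_()       # momentum reset
        norm = thresh * ema_norm
    optimizer.step()
    optimizer.zero_grad()
    return beta * ema_norm + (1.0 - beta) * norm  # updated running norm
```

In a training loop one would call `ema = spike_aware_step(model, opt, ema)` after each `loss.backward()`, starting from `ema = 0.0`.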
- Adaptive Pruning for Large Language Models with Structural Importance Awareness [66.2690963378878]
Large language models (LLMs) have significantly improved language understanding and generation capabilities. LLMs are difficult to deploy on resource-constrained edge devices due to their high computational and storage resource demands. We propose structurally-aware adaptive pruning (SAAP) to significantly reduce the computational and memory costs while maintaining model performance.
arXiv Detail & Related papers (2024-12-19T18:08:04Z)
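Structured pruning of the kind SAAP performs can be pictured on a single linear layer: score each output unit (here by L2 weight norm, a common proxy; SAAP's structural-importance metric is more elaborate) and keep only the top fraction, so whole rows disappear and the layer physically shrinks.

```python
import torch
import torch.nn as nn

def prune_linear_rows(layer: nn.Linear, keep_ratio: float = 0.75) -> nn.Linear:
    """Return a smaller Linear keeping the highest-importance output rows."""
    importance = layer.weight.norm(dim=1)            # L2 norm per output unit
    k = max(1, int(layer.out_features * keep_ratio))
    keep = importance.topk(k).indices.sort().values  # preserve row order
    pruned = nn.Linear(layer.in_features, k, bias=layer.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(layer.weight[keep])
        if layer.bias is not None:
            pruned.bias.copy_(layer.bias[keep])
    return pruned

layer = nn.Linear(1024, 1024)
small = prune_linear_rows(layer, keep_ratio=0.5)
print(small)  # Linear(in_features=1024, out_features=512, ...)
```

Note that pruning a layer's output rows also requires slicing the matching input columns of the next layer, which this sketch omits.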
- A Little Help Goes a Long Way: Efficient LLM Training by Leveraging Small LMs [74.35290684163718]
A primary challenge in large language model (LLM) development is their onerous pre-training cost. This paper explores a promising paradigm to improve LLM pre-training efficiency and quality by leveraging a small language model (SLM).
arXiv Detail & Related papers (2024-10-24T14:31:52Z)
- Characterization of Large Language Model Development in the Datacenter [55.9909258342639]
Large Language Models (LLMs) have presented impressive performance across several transformative tasks.
However, it is non-trivial to efficiently utilize large-scale cluster resources to develop LLMs.
We present an in-depth characterization study of a six-month LLM development workload trace collected from our GPU datacenter Acme.
arXiv Detail & Related papers (2024-03-12T13:31:14Z)
- MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT [87.4910758026772]
"Bigger the better" has been the predominant trend in recent Large Language Models (LLMs) development.
This paper explores the "less is more" paradigm by addressing the challenge of designing accurate yet efficient Small Language Models (SLMs) for resource constrained devices.
arXiv Detail & Related papers (2024-02-26T18:59:03Z)
- Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning [50.9692060692705]
This paper introduces Language Models for Motion Control (LaMo), a general framework based on Decision Transformers for offline RL. Our framework highlights four crucial components, including (1) initializing Decision Transformers with sequentially pre-trained LMs and (2) employing the LoRA fine-tuning method. In particular, our method demonstrates superior performance in scenarios with limited data samples.
arXiv Detail & Related papers (2023-10-31T16:24:17Z)
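Of the components listed for LaMo, LoRA fine-tuning is the most self-contained to sketch: the pre-trained weight stays frozen and only a low-rank update B @ A is trained. The rank, scaling, and initialization below are generic LoRA conventions, not LaMo's exact settings.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained Linear plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep pre-trained weights frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # starts as no-op
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

wrapped = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in wrapped.parameters() if p.requires_grad)
print(f"trainable params: {trainable} of {768*768 + 768 + trainable} total")
```

Because B is zero-initialized, the wrapped layer initially reproduces the pre-trained model exactly, and training only ever touches the small A and B matrices.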
- Reconciling High Accuracy, Cost-Efficiency, and Low Latency of Inference Serving Systems [0.0]
InfAdapter proactively selects a set of ML model variants with their resource allocations to meet the latency SLO. It decreases SLO violations and costs by up to 65% and 33%, respectively, compared to a popular industry autoscaler.
arXiv Detail & Related papers (2023-04-21T11:19:49Z)
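A stripped-down version of the variant-selection idea behind InfAdapter: among model variants with known quality, latency, and cost profiles, choose the best variant whose predicted latency stays under the SLO. The variant table and tie-breaking rule below are illustrative assumptions (InfAdapter selects sets of variants with resource allocations via a more involved optimization).

```python
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    accuracy: float       # proxy quality metric
    p99_latency_ms: float
    cost_per_hour: float

VARIANTS = [
    Variant("resnet18",  0.70, 12.0, 0.4),
    Variant("resnet50",  0.76, 25.0, 0.9),
    Variant("resnet152", 0.78, 60.0, 2.1),
]

def select_variant(slo_ms: float, variants=VARIANTS) -> Variant:
    """Most accurate variant that meets the latency SLO; break ties on cost."""
    feasible = [v for v in variants if v.p99_latency_ms <= slo_ms]
    if not feasible:
        raise ValueError("no variant can satisfy the SLO")
    return max(feasible, key=lambda v: (v.accuracy, -v.cost_per_hour))

print(select_variant(slo_ms=30.0).name)  # resnet50
```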
- CILP: Co-simulation based Imitation Learner for Dynamic Resource Provisioning in Cloud Computing Environments [13.864161788250856]
A key challenge for latency-critical tasks is to predict future workload demands so as to provision resources proactively. Existing AI-based solutions tend not to holistically consider all crucial aspects such as provisioning overheads, heterogeneous VM costs, and Quality of Service (QoS) of the cloud system. We propose a novel method, called CILP, that formulates the VM provisioning problem as two sub-problems of prediction and optimization.
arXiv Detail & Related papers (2023-02-11T09:15:34Z)
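CILP's split into prediction and optimization sub-problems can be sketched as a two-stage loop: forecast the next interval's demand (a naive moving average stands in here for CILP's co-simulated imitation learner), then provision the cheapest VM mix that covers the forecast. All capacities, prices, and the greedy optimizer are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class VMType:
    name: str
    capacity: float      # requests/s one VM can serve
    cost_per_hour: float

VM_TYPES = [VMType("small", 100.0, 0.10), VMType("large", 450.0, 0.40)]

def predict_demand(history: list[float], window: int = 3) -> float:
    """Stage 1 (prediction): naive moving average of recent demand."""
    return sum(history[-window:]) / min(window, len(history))

def provision(demand: float, vm_types=VM_TYPES) -> dict[str, int]:
    """Stage 2 (optimization): greedily cover demand at lowest $/capacity."""
    plan: dict[str, int] = {}
    for vm in sorted(vm_types, key=lambda v: v.cost_per_hour / v.capacity):
        n = int(demand // vm.capacity)
        if n:
            plan[vm.name] = n
            demand -= n * vm.capacity
    if demand > 0:  # cover the remainder with one cheapest VM
        cheapest = min(vm_types, key=lambda v: v.cost_per_hour)
        plan[cheapest.name] = plan.get(cheapest.name, 0) + 1
    return plan

print(provision(predict_demand([800, 950, 1100])))  # {'large': 2, 'small': 1}
```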
This list is automatically generated from the titles and abstracts of the papers on this site.