SLOs-Serve: Optimized Serving of Multi-SLO LLMs
- URL: http://arxiv.org/abs/2504.08784v1
- Date: Sat, 05 Apr 2025 17:41:26 GMT
- Title: SLOs-Serve: Optimized Serving of Multi-SLO LLMs
- Authors: Siyuan Chen, Zhipeng Jia, Samira Khan, Arvind Krishnamurthy, Phillip B. Gibbons
- Abstract summary: SLOs-Serve is a system designed for serving multi-stage large language model (LLM) requests with application- and stage-specific service level objectives (SLOs). The key idea behind SLOs-Serve is to customize the allocation of tokens to meet these SLO requirements.
- Score: 11.102801440968706
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper introduces SLOs-Serve, a system designed for serving multi-stage large language model (LLM) requests with application- and stage-specific service level objectives (SLOs). The key idea behind SLOs-Serve is to customize the allocation of tokens to meet these SLO requirements. SLOs-Serve uses a multi-SLO dynamic programming-based algorithm to continuously optimize token allocations under SLO constraints by exploring the full design space of chunked prefill and (optional) speculative decoding. Leveraging this resource planning algorithm, SLOs-Serve effectively supports multi-SLOs and multi-replica serving with dynamic request routing while being resilient to bursty arrivals. Our evaluation across 6 LLM application scenarios (including summarization, coding, chatbot, tool calling, and reasoning) demonstrates that SLOs-Serve improves per-GPU serving capacity by 2.2x on average compared to prior state-of-the-art systems.
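The abstract does not spell out the planner, but its core ingredient, allocating a bounded per-iteration token budget across requests with different SLO values, can be illustrated as a small dynamic program. The sketch below is a deliberately simplified 0/1 knapsack over a single scheduling round; the request fields, weights, and budget are hypothetical, not SLOs-Serve's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class Request:
    name: str
    tokens_needed: int  # tokens this request must get this round to stay on its SLO track (hypothetical)
    slo_value: float    # weight of meeting this request's SLO, e.g. by application class (hypothetical)

def plan_tokens(requests: list[Request], budget: int):
    """0/1 knapsack: pick the subset of requests to serve this round that
    maximizes total SLO value without exceeding the per-iteration token budget."""
    dp = [(0.0, [])] * (budget + 1)  # dp[b] = (best value, chosen names) using b tokens
    for req in requests:
        for b in range(budget, req.tokens_needed - 1, -1):
            prev_val, prev_set = dp[b - req.tokens_needed]
            if prev_val + req.slo_value > dp[b][0]:
                dp[b] = (prev_val + req.slo_value, prev_set + [req.name])
    return dp[budget]

reqs = [
    Request("chat-prefill", tokens_needed=512, slo_value=1.0),  # chunked-prefill slice
    Request("code-decode", tokens_needed=64, slo_value=1.0),    # decode step plus draft tokens
    Request("summarize", tokens_needed=1024, slo_value=0.5),    # loose-SLO batch job
]
value, chosen = plan_tokens(reqs, budget=1200)
print(f"SLO value {value}: serve {chosen}")  # serves chat-prefill and code-decode
```

A real planner would re-solve such a problem every iteration, with chunked prefill setting the token demand of prefill-stage requests and (optional) speculative decoding inflating the demand of decode-stage ones.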
Related papers
- Tempo: Application-aware LLM Serving with Mixed SLO Requirements [7.290735867969561]
We introduce Tempo, a scheduler designed to maximize service gain across diverse LLM workloads.
Our evaluation shows that Tempo improves end-to-end service gain by up to 8.3x and achieves up to 10.3x SLO goodput compared to state-of-the-art designs.
arXiv Detail & Related papers (2025-04-24T05:55:21Z)
- AccelGen: Heterogeneous SLO-Guaranteed High-Throughput LLM Inference Serving for Diverse Applications [8.964981700274059]
We propose AccelGen, a high-throughput inference serving system with heterogeneous SLO guarantees for diverse applications. Experiments on real traces demonstrate that AccelGen achieves 1.42-11.21X higher throughput, 1.43-13.71X higher goodput, 37-90% higher SLO attainment, and 1.61-12.22X lower response latency compared to state-of-the-art approaches.
arXiv Detail & Related papers (2025-03-17T21:47:43Z)
- SpecServe: Efficient and SLO-Aware Large Language Model Serving with Adaptive Speculative Decoding [18.45994543035372]
Speculative decoding has emerged as a compelling technique to accelerate Large Language Model inference. Existing speculative decoding solutions often fail to adapt to varying workloads and system environments. We introduce SpecServe, an efficient LLM inference system that dynamically adjusts speculative strategies according to real-time request loads.
arXiv Detail & Related papers (2025-03-07T02:27:51Z)
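The load-adaptive idea in this entry can be rendered as a toy policy that shrinks or disables speculation as pressure rises. The thresholds and scaling rule below are invented for illustration and are not SpecServe's algorithm.

```python
def draft_length(queue_depth: int, acceptance_rate: float, max_draft: int = 8) -> int:
    """Pick how many draft tokens to speculate this step (toy policy).

    Intuition: speculation pays off when drafts are usually accepted and the
    system is lightly loaded; under heavy load, rejected drafts waste compute
    that other requests need to meet their SLOs.
    """
    if queue_depth > 32 or acceptance_rate < 0.3:
        return 0  # fall back to plain autoregressive decoding
    k = int(max_draft * acceptance_rate)   # speculate more when drafts stick
    k = max(1, k - queue_depth // 8)       # back off as the queue grows
    return min(k, max_draft)

print(draft_length(queue_depth=4, acceptance_rate=0.8))   # 6: aggressive drafting
print(draft_length(queue_depth=40, acceptance_rate=0.8))  # 0: disabled under load
```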
- Autellix: An Efficient Serving Engine for LLM Agents as General Programs [59.673243129044465]
Large language model (LLM) applications are evolving beyond simple chatbots into dynamic, general-purpose agentic programs. Existing LLM serving systems ignore dependencies between programs and calls, missing significant opportunities for optimization. We introduce Autellix, an LLM serving system that treats programs as first-class citizens to minimize their end-to-end latencies.
arXiv Detail & Related papers (2025-02-19T18:59:30Z)
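Treating the program rather than the individual call as the scheduling unit can be sketched as a queue ordered by each program's attained service. This is a hedged illustration of program-level scheduling, not Autellix's exact policy.

```python
import heapq
import itertools

class ProgramAwareQueue:
    """Toy scheduler that orders LLM calls by how much service their parent
    program has already received (least-attained-service first)."""

    def __init__(self):
        self._heap = []                 # (attained_service, tiebreak, program_id, call)
        self._tie = itertools.count()   # FIFO tie-break among equal priorities
        self.attained = {}              # program_id -> total tokens served so far

    def submit(self, program_id: str, call: str) -> None:
        prio = self.attained.get(program_id, 0)
        heapq.heappush(self._heap, (prio, next(self._tie), program_id, call))

    def next_call(self) -> tuple[str, str]:
        _, _, program_id, call = heapq.heappop(self._heap)
        return program_id, call

    def record(self, program_id: str, tokens: int) -> None:
        self.attained[program_id] = self.attained.get(program_id, 0) + tokens

q = ProgramAwareQueue()
q.record("agent-A", 5000)      # a long-running agentic program
q.submit("agent-A", "call-7")
q.submit("agent-B", "call-1")  # fresh program with no attained service
print(q.next_call())           # ('agent-B', 'call-1'): short programs aren't starved
```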
- AdaServe: SLO-Customized LLM Serving with Fine-Grained Speculative Decoding [12.377283389338709]
AdaServe is the first LLM serving system to support SLO customization through fine-grained speculative decoding. AdaServe achieves up to 73% higher SLO attainment and 74% higher goodput compared to state-of-the-art systems.
arXiv Detail & Related papers (2025-01-21T14:15:01Z)
- Hierarchical Autoscaling for Large Language Model Serving with Chiron [2.767894999702707]
Large language model (LLM) serving is becoming an increasingly important workload for cloud providers. Previous autoscalers for LLM serving do not consider request SLOs, leading to unnecessary scaling and resource under-utilization. We introduce Chiron, an autoscaler that uses the idea of hierarchical backpressure estimated using queue size, utilization, and SLOs.
arXiv Detail & Related papers (2025-01-14T12:57:40Z)
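One way to picture "backpressure estimated using queue size, utilization, and SLOs" is a scalar pressure signal that drives replica counts. The formula and thresholds below are invented for illustration and are not Chiron's estimator.

```python
def backpressure(queue_len: int, service_rate: float,
                 utilization: float, slo_seconds: float) -> float:
    """Pressure = (projected queueing delay / SLO), inflated near saturation."""
    expected_wait = queue_len / max(service_rate, 1e-9)  # requests / (requests per s)
    saturation = 1.0 / max(1.0 - utilization, 0.05)      # grows sharply as util -> 1
    return (expected_wait / slo_seconds) * saturation

def scale_decision(pressure: float, replicas: int,
                   high: float = 1.0, low: float = 0.3) -> int:
    """Scale out when projected delay would blow the SLO; scale in with headroom."""
    if pressure > high:
        return replicas + 1
    if pressure < low and replicas > 1:
        return replicas - 1
    return replicas

p = backpressure(queue_len=20, service_rate=10.0, utilization=0.9, slo_seconds=2.0)
print(f"pressure {p:.1f} -> {scale_decision(p, replicas=3)} replicas")  # pressure 10.0 -> 4 replicas
```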
- MALMM: Multi-Agent Large Language Models for Zero-Shot Robotics Manipulation [52.739500459903724]
Large Language Models (LLMs) have demonstrated remarkable planning abilities across various domains, including robotics manipulation and navigation.
We propose a novel multi-agent LLM framework that distributes high-level planning and low-level control code generation across specialized LLM agents.
We evaluate our approach on nine RLBench tasks, including long-horizon tasks, and demonstrate its ability to solve robotics manipulation tasks in a zero-shot setting.
arXiv Detail & Related papers (2024-11-26T17:53:44Z)
- Large Language Model as a Catalyst: A Paradigm Shift in Base Station Siting Optimization [62.16747639440893]
Large language models (LLMs) and their associated technologies continue to advance, particularly in the realms of prompt engineering and agent engineering.
Our proposed framework incorporates retrieval-augmented generation (RAG) to enhance the system's ability to acquire domain-specific knowledge and generate solutions.
arXiv Detail & Related papers (2024-08-07T08:43:32Z)
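The RAG component mentioned here follows a standard pattern: retrieve the most relevant domain documents and prepend them to the prompt. Below is a self-contained toy with a stand-in bag-of-letters embedding; a real pipeline would call an embedding model and then an LLM, and the corpus shown is invented.

```python
import math

def embed(text: str) -> list[float]:
    # Stand-in embedding: letter frequencies. A real system would call an
    # embedding model; this just keeps the sketch self-contained.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    qv = embed(query)
    return sorted(corpus, key=lambda doc: cosine(qv, embed(doc)), reverse=True)[:k]

corpus = [  # invented domain snippets
    "Coverage radius shrinks with carrier frequency and obstacles.",
    "Base station siting must respect zoning and power constraints.",
    "LLM agents can draft candidate siting plans from engineering specs.",
]
context = retrieve("where should we place base stations?", corpus)
prompt = "Context:\n" + "\n".join(context) + "\nQuestion: propose a siting plan."
print(prompt)  # this assembled prompt would then be sent to the LLM
```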
- Efficient Prompting for LLM-based Generative Internet of Things [88.84327500311464]
Large language models (LLMs) have demonstrated remarkable capacities on various tasks, and integrating the capacities of LLMs into the Internet of Things (IoT) applications has drawn much research attention recently.
Due to security concerns, many institutions avoid accessing state-of-the-art commercial LLM services, requiring the deployment and utilization of open-source LLMs in a local network setting.
In this study, we propose an LLM-based Generative IoT (GIoT) system deployed in a local network setting.
arXiv Detail & Related papers (2024-06-14T19:24:00Z)
- Queue Management for SLO-Oriented Large Language Model Serving [3.0134961904579094]
We propose QLM, a queue management system for large language model (LLM) serving. QLM maintains batch and interactive requests across different models and SLOs in a request queue. It uses a Request Waiting Time (RWT) Estimator that estimates the waiting times for requests in the request queue.
arXiv Detail & Related papers (2024-06-05T21:17:34Z)
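A waiting-time estimate becomes useful once it drives queue actions. The sketch below pairs a naive estimator (sum of service times ahead in the queue) with earliest-deadline-first reordering; QLM's actual estimator and actions are richer than this illustration.

```python
from dataclasses import dataclass

@dataclass
class Queued:
    rid: str
    slo_deadline: float  # seconds until this request's SLO deadline
    est_service: float   # estimated service time in seconds

def estimated_wait(queue: list[Queued], idx: int) -> float:
    """Waiting time for queue[idx] if served in order: the total service
    time of everything ahead of it (a naive stand-in for an RWT estimator)."""
    return sum(q.est_service for q in queue[:idx])

def reorder_for_slos(queue: list[Queued]) -> list[Queued]:
    ranked = sorted(queue, key=lambda q: q.slo_deadline)  # earliest deadline first
    for i, q in enumerate(ranked):
        wait = estimated_wait(ranked, i)
        if wait > q.slo_deadline:
            print(f"{q.rid}: estimated wait {wait:.1f}s exceeds SLO, would escalate")
    return ranked

queue = [
    Queued("batch-1", slo_deadline=60.0, est_service=5.0),
    Queued("chat-1", slo_deadline=1.0, est_service=0.5),
    Queued("chat-2", slo_deadline=2.0, est_service=0.5),
]
print([q.rid for q in reorder_for_slos(queue)])  # chats jump ahead of the batch job
```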
- FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning [70.38817963253034]
This paper first discusses the challenges of federated fine-tuning of LLMs, and then introduces our package FS-LLM as a main contribution.
We provide comprehensive federated parameter-efficient fine-tuning algorithm implementations and versatile programming interfaces for future extension in FL scenarios.
We conduct extensive experiments to validate the effectiveness of FS-LLM and benchmark advanced LLMs with state-of-the-art parameter-efficient fine-tuning algorithms in FL settings.
arXiv Detail & Related papers (2023-09-01T09:40:36Z)
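The recurring pattern behind federated parameter-efficient fine-tuning is that clients communicate and average only small adapter tensors while the frozen base model stays local. Below is a minimal sketch of that averaging step; the function and field names are illustrative, not FS-LLM's API.

```python
def fedavg(client_updates: list[dict[str, list[float]]],
           weights: list[float]) -> dict[str, list[float]]:
    """Weighted average of per-client adapter tensors (flattened lists)."""
    total = sum(weights)
    avg = {}
    for name in client_updates[0]:
        dim = len(client_updates[0][name])
        avg[name] = [
            sum(w * upd[name][i] for w, upd in zip(weights, client_updates)) / total
            for i in range(dim)
        ]
    return avg

# Two clients fine-tune only their LoRA adapters; the frozen base model never
# leaves the server, which is what keeps communication cheap.
clients = [
    {"lora_A": [0.1, 0.2], "lora_B": [1.0, 0.0]},
    {"lora_A": [0.3, 0.0], "lora_B": [0.0, 1.0]},
]
print(fedavg(clients, weights=[1.0, 3.0]))  # weights could be client dataset sizes
```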
- An Introduction to Bi-level Optimization: Foundations and Applications in Signal Processing and Machine Learning [46.02026158913706]
Bi-level optimization (BLO) has taken center stage in some exciting developments in the area of signal processing (SP) and machine learning (ML). BLO is a classical optimization problem that involves two levels of hierarchy (i.e., upper and lower levels).
Prominent applications of BLO range from resource allocation for wireless systems to adversarial machine learning.
arXiv Detail & Related papers (2023-08-01T18:59:07Z)
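For readers unfamiliar with the form, the two-level hierarchy referenced in this entry is the standard textbook BLO formulation (generic notation, not this paper's):

```latex
% Upper level chooses x, anticipating the lower level's best response y*(x).
\min_{x \in \mathcal{X}} \; f\bigl(x,\, y^{*}(x)\bigr)
\quad \text{s.t.} \quad
y^{*}(x) \in \arg\min_{y \in \mathcal{Y}} \; g(x, y)
```

The upper-level decision x parameterizes the lower-level problem, and the lower-level minimizer y*(x) feeds back into the upper-level objective f; hyperparameter optimization and adversarial training fit this pattern.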