FlexLLM: A System for Co-Serving Large Language Model Inference and
Parameter-Efficient Finetuning
- URL: http://arxiv.org/abs/2402.18789v1
- Date: Thu, 29 Feb 2024 01:33:08 GMT
- Title: FlexLLM: A System for Co-Serving Large Language Model Inference and
Parameter-Efficient Finetuning
- Authors: Xupeng Miao, Gabriele Oliaro, Xinhao Cheng, Mengdi Wu, Colin Unger,
Zhihao Jia
- Abstract summary: Existing systems cannot handle workloads that include a mix of inference and PEFT finetuning requests.
We present FlexLLM, the first system that can serve inference and parameter-efficient finetuning requests in the same iteration.
Compared to existing systems, FlexLLM's co-serving approach reduces the activation GPU memory overhead by up to 8x, and the end-to-end GPU memory requirement of finetuning by up to 36%.
- Score: 9.979010592887096
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Parameter-efficient finetuning (PEFT) is a widely used technique to adapt
large language models for different tasks. Service providers typically create
separate systems for users to perform PEFT model finetuning and inference
tasks. This is because existing systems cannot handle workloads that include a
mix of inference and PEFT finetuning requests. As a result, shared GPU
resources are underutilized, leading to inefficiencies. To address this
problem, we present FlexLLM, the first system that can serve inference and
parameter-efficient finetuning requests in the same iteration. Our system
leverages the complementary nature of these two tasks and utilizes shared GPU
resources to run them jointly, using a method called co-serving. To achieve
this, FlexLLM introduces a novel token-level finetuning mechanism, which breaks
down the finetuning computation of a sequence into smaller token-level
computations and uses dependent parallelization and graph pruning, two static
compilation optimizations, to minimize the memory overhead and latency for
co-serving. Compared to existing systems, FlexLLM's co-serving approach reduces
the activation GPU memory overhead by up to 8x, and the end-to-end GPU memory
requirement of finetuning by up to 36% while maintaining a low inference
latency and improving finetuning throughput. For example, under a heavy
inference workload, FlexLLM can still preserve more than 80% of the peak
finetuning throughput, whereas existing systems cannot make any progress with
finetuning. The source code of FlexLLM is publicly available at
https://github.com/flexflow/FlexFlow.
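To make the co-serving idea concrete, below is a minimal Python sketch of the scheduling intuition only; the class names, token budget, and packing policy are illustrative assumptions, not FlexLLM's actual implementation or API. Each serving iteration has a fixed token budget: latency-critical inference decode tokens are admitted first, and the leftover capacity is packed with token-level chunks of pending finetuning sequences, so finetuning progresses whenever inference leaves headroom.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of FlexLLM-style co-serving. Class and function names,
# the token budget, and the packing policy are illustrative assumptions,
# not the real FlexLLM implementation.

@dataclass
class InferenceRequest:
    req_id: int
    tokens_left: int            # decode steps still pending for this request

@dataclass
class FinetuneSequence:
    seq_id: int
    total_tokens: int
    processed: int = 0          # token-level progress through the sequence

def plan_iteration(infer_reqs: List[InferenceRequest],
                   ft_seqs: List[FinetuneSequence],
                   token_budget: int = 256):
    """Build one fused batch: inference decode tokens are admitted first,
    and the leftover budget is packed with token-level finetuning chunks."""
    batch, used = [], 0
    # 1. One decode token per active inference request (latency-critical).
    for r in infer_reqs:
        if r.tokens_left > 0 and used < token_budget:
            batch.append(("infer", r.req_id, 1))
            r.tokens_left -= 1
            used += 1
    # 2. Fill the remaining capacity with finetuning tokens.
    for s in ft_seqs:
        if used >= token_budget:
            break
        pending = s.total_tokens - s.processed
        if pending > 0:
            chunk = min(pending, token_budget - used)
            batch.append(("finetune", s.seq_id, chunk))
            s.processed += chunk
            used += chunk
    return batch

# Toy run: two inference requests co-served with one finetuning sequence.
infer = [InferenceRequest(0, 5), InferenceRequest(1, 3)]
ft = [FinetuneSequence(100, 20)]
while any(r.tokens_left for r in infer) or any(s.processed < s.total_tokens for s in ft):
    print(plan_iteration(infer, ft, token_budget=8))
```

Under a heavy inference load, the decode tokens consume most of the budget and finetuning slows but still makes progress, which mirrors the behavior described in the abstract; the real system additionally applies dependent parallelization and graph pruning to bound the memory overhead of the fused computation.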
Related papers
- MEFT: Memory-Efficient Fine-Tuning through Sparse Adapter [40.616849959987555]
We introduce a novel mechanism that fine-tunes Large Language Models (LLMs) with larger adapters while remaining memory-efficient.
This is achieved by leveraging the inherent activation sparsity in the Feed-Forward Networks (FFNs) of LLMs.
We employ a Mixture of Experts (MoE)-like architecture to mitigate unnecessary CPU computations and reduce the communication volume between the GPU and CPU (a toy sketch of this idea appears after this list).
arXiv Detail & Related papers (2024-06-07T14:49:22Z) - LoongServe: Efficiently Serving Long-Context Large Language Models with Elastic Sequence Parallelism [12.521026493432181]
Existing large language model (LLM) serving systems cannot efficiently serve variable-length requests in different phases.
We propose a new parallelism paradigm, elastic sequence parallelism (ESP), to adapt to the variance between different requests and phases.
LoongServe improves the maximum throughput by up to 3.85× compared to chunked prefill and by up to 5.81× compared to prefill-decoding disaggregation.
arXiv Detail & Related papers (2024-04-15T07:45:04Z) - JORA: JAX Tensor-Parallel LoRA Library for Retrieval Augmented Fine-Tuning [16.86356520836045]
We introduce a novel framework for PEFT-compatible fine-tuning of Llama-2 models, leveraging distributed training.
Our framework uniquely utilizes JAX's just-in-time (JIT) compilation and tensor-sharding for efficient resource management.
Our experiments show a more than 12x improvement in runtime compared to the Hugging Face/DeepSpeed implementation on four GPUs, while consuming less than half the VRAM per GPU.
arXiv Detail & Related papers (2024-03-17T23:02:04Z) - Green AI: A Preliminary Empirical Study on Energy Consumption in DL
Models Across Different Runtime Infrastructures [56.200335252600354]
It is common practice to deploy pre-trained models in environments distinct from their native development settings.
This led to the introduction of interchange formats such as ONNX, which, together with its runtime infrastructure, serves as a standard format for deploying models across environments.
arXiv Detail & Related papers (2024-02-21T09:18:44Z) - FFSplit: Split Feed-Forward Network For Optimizing Accuracy-Efficiency
Trade-off in Language Model Inference [57.119047493787185]
This paper shows how to reduce model size by 43.1% and bring a 1.25× to 1.56× wall-clock time speedup on different hardware with negligible accuracy drop.
arXiv Detail & Related papers (2024-01-08T17:29:16Z) - Energy-efficient Task Adaptation for NLP Edge Inference Leveraging
Heterogeneous Memory Architectures [68.91874045918112]
adapter-ALBERT is an efficient model optimization that enables maximal data reuse across different tasks.
We demonstrate the advantage of mapping the model to a heterogeneous on-chip memory architecture by performing simulations on a validated NLP edge accelerator.
arXiv Detail & Related papers (2023-03-25T14:40:59Z) - FlexGen: High-Throughput Generative Inference of Large Language Models
with a Single GPU [89.2451963569343]
FlexGen is a generation engine for running large language model (LLM) inference on a single commodity GPU.
When running OPT-175B on a single 16GB GPU, FlexGen achieves significantly higher throughput compared to state-of-the-art offloading systems.
On the HELM benchmark, FlexGen can benchmark a 30B model with a 16GB GPU on 7 representative sub-scenarios in 21 hours.
arXiv Detail & Related papers (2023-03-13T05:19:28Z) - Joint Parameter-and-Bandwidth Allocation for Improving the Efficiency of
Partitioned Edge Learning [73.82875010696849]
Machine learning algorithms are deployed at the network edge for training artificial intelligence (AI) models.
This paper focuses on the novel joint design of parameter (computation load) allocation and bandwidth allocation.
arXiv Detail & Related papers (2020-03-10T05:52:15Z) - Dynamic Parameter Allocation in Parameter Servers [74.250687861348]
We propose to integrate dynamic parameter allocation into parameter servers and describe an efficient implementation of such a parameter server, called Lapse.
We found that Lapse provides near-linear scaling and can be orders of magnitude faster than existing parameter servers.
arXiv Detail & Related papers (2020-02-03T11:37:54Z)
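As referenced in the MEFT entry above, the following toy numpy sketch illustrates the sparse-adapter idea: a large adapter is applied by touching only its most strongly activated FFN neurons, which is what keeps the GPU-side compute and the CPU-GPU transfer volume small. All shapes, names, and the top-k selection rule below are assumptions for illustration, not MEFT's actual design.

```python
import numpy as np

# Hypothetical sketch of the MEFT idea: a large adapter nominally lives in
# host (CPU) memory, but only the rows matching the most strongly activated
# neurons are gathered and applied for each token. Shapes, names, and the
# top-k rule are illustrative assumptions.

d_model, d_adapter, top_k = 64, 1024, 32
rng = np.random.default_rng(0)

W_up = rng.standard_normal((d_model, d_adapter)) / np.sqrt(d_model)      # "CPU-resident" adapter
W_down = rng.standard_normal((d_adapter, d_model)) / np.sqrt(d_adapter)

def sparse_adapter(x):
    """Apply a large adapter using only its most active neurons."""
    pre = x @ W_up                      # pre-activations for all adapter neurons
    act = np.maximum(pre, 0.0)          # ReLU -> most entries are (near) zero
    idx = np.argsort(act)[-top_k:]      # keep the k most active neurons
    # Only these selected rows would be copied to the GPU in MEFT.
    return act[idx] @ W_down[idx, :]

x = rng.standard_normal(d_model)
delta = sparse_adapter(x)
print(delta.shape)                      # (64,) adapter output added to the FFN output
```

In MEFT the full adapter resides in host memory and only the selected rows are moved to the GPU; the sketch keeps everything in one process simply to show the selection step and the reduced matrix sizes involved.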