The Synergy of Speculative Decoding and Batching in Serving Large
Language Models
- URL: http://arxiv.org/abs/2310.18813v1
- Date: Sat, 28 Oct 2023 20:36:36 GMT
- Title: The Synergy of Speculative Decoding and Batching in Serving Large
Language Models
- Authors: Qidong Su, Christina Giannoula, Gennady Pekhimenko
- Abstract summary: We propose a new speculative decoding strategy that chooses the optimal speculation length for different batch sizes.
Our evaluations show that our proposed method achieves equal or better performance than state-of-the-art speculative decoding schemes with a fixed speculation length.
- Score: 3.3849225405083336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) like GPT are state-of-the-art text generation
models that provide significant assistance in daily routines. However, LLM
execution is inherently sequential, since the model produces only one token at a
time, which leads to low hardware utilization on modern GPUs. Batching and
speculative decoding are two techniques to improve GPU hardware utilization in
LLM inference. To study their synergy, we build a prototype and perform an
extensive characterization analysis on various LLMs and
GPU architectures. We observe that the optimal speculation length depends on
the batch size used. We analyze the key observation and build a quantitative
model to explain it. Based on our analysis, we propose a new adaptive
speculative decoding strategy that chooses the optimal speculation length for
different batch sizes. Our evaluations show that our proposed method can
achieve equal or better performance than state-of-the-art speculative
decoding schemes with a fixed speculation length.
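As a concrete illustration of the idea, the following is a minimal Python sketch of an adaptive policy that picks a speculation length per batch size. The cost model, constants, and function names are illustrative assumptions for exposition, not the paper's quantitative model or implementation.

```python
# Minimal sketch of an adaptive speculation-length policy (illustrative only).
# The cost model and constants below are assumptions for exposition; they are
# not the quantitative model or the parameters from the paper.

def speculation_latency(batch_size: int, spec_len: int,
                        draft_cost: float = 0.2, verify_cost: float = 1.0,
                        accept_rate: float = 0.8) -> float:
    """Rough per-accepted-token latency of one draft-then-verify round.

    One round drafts `spec_len` tokens with a cheap model, then verifies them
    in a single target-model pass. Larger batches make that verification pass
    more compute-bound, so long speculation lengths pay off less.
    """
    # Drafting is sequential in spec_len; verification is one batched pass
    # whose cost grows with both the batch size and the number of drafted tokens.
    round_cost = spec_len * draft_cost + verify_cost * (1 + 0.05 * batch_size * spec_len)
    # Expected tokens produced per round: accepted drafts plus one bonus/corrected token.
    expected_tokens = 1 + sum(accept_rate ** i for i in range(1, spec_len + 1))
    return round_cost / expected_tokens


def choose_speculation_length(batch_size: int, max_spec_len: int = 8) -> int:
    """Pick the speculation length that minimizes the modeled per-token latency."""
    return min(range(1, max_spec_len + 1),
               key=lambda k: speculation_latency(batch_size, k))


if __name__ == "__main__":
    # With this toy cost model, small batches favor longer speculation and
    # large batches favor shorter speculation, matching the paper's observation.
    for bs in (1, 4, 16, 64):
        print(f"batch_size={bs:3d} -> speculation length {choose_speculation_length(bs)}")
```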
Related papers
- ParallelSpec: Parallel Drafter for Efficient Speculative Decoding [62.68430939686566]
We present ParallelSpec, an alternative to auto-regressive drafting strategies in state-of-the-art speculative decoding approaches.
In contrast to auto-regressive drafting in the speculative stage, we train a parallel drafter to serve as an efficient speculative model.
arXiv Detail & Related papers (2024-10-08T01:05:08Z)
- Graph-Structured Speculative Decoding [52.94367724136063]
Speculative decoding has emerged as a promising technique to accelerate the inference of Large Language Models.
We introduce an innovative approach utilizing a directed acyclic graph (DAG) to manage the drafted hypotheses.
We observe a remarkable speedup of 1.73x to 1.96x, significantly surpassing standard speculative decoding.
arXiv Detail & Related papers (2024-07-23T06:21:24Z) - Beyond the Speculative Game: A Survey of Speculative Execution in Large Language Models [9.121458241884444]
Speculative execution is introduced to LLM decoding in a draft-then-verify style.
As the costly inference is parallelized, decoding speed can be significantly boosted.
We present the first survey paper that reviews and unifies literature of speculative execution in LLMs.
arXiv Detail & Related papers (2024-04-23T10:25:45Z)
- Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration [54.897493351694195]
We propose a novel parallel decoding approach, namely hidden transfer, which decodes multiple successive tokens simultaneously in a single forward pass.
In terms of acceleration metrics, we outperform all the single-model acceleration techniques, including Medusa and Self-Speculative decoding.
arXiv Detail & Related papers (2024-04-18T09:17:06Z)
- Chimera: A Lossless Decoding Method for Accelerating Large Language Models Inference by Fusing all Tokens [15.566726645722657]
We propose a novel framework specifically designed for speculative sampling.
Within this framework, we introduce a lightweight draft model that effectively utilizes previously generated tokens to predict subsequent words.
We demonstrate impressive results, achieving an average latency speedup ratio of 2.7x compared to the vanilla auto-regressive decoding approach.
arXiv Detail & Related papers (2024-02-24T08:10:39Z)
- A Thorough Examination of Decoding Methods in the Era of LLMs [72.65956436513241]
Decoding methods play an indispensable role in converting language models from next-token predictors into practical task solvers.
This paper provides a comprehensive and multifaceted analysis of various decoding methods within the context of large language models.
Our findings reveal that decoding method performance is notably task-dependent and influenced by factors such as alignment, model size, and quantization.
arXiv Detail & Related papers (2024-02-10T11:14:53Z)
- Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding [46.485363806259265]
Speculative Decoding has emerged as a novel decoding paradigm for Large Language Models (LLMs) inference.
In each decoding step, this method first drafts several future tokens efficiently and then verifies them in parallel (a minimal sketch of this draft-then-verify loop appears at the end of this list).
This paper presents a comprehensive overview and analysis of this promising decoding paradigm.
arXiv Detail & Related papers (2024-01-15T17:26:50Z)
- Evaluating and Explaining Large Language Models for Code Using Syntactic Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
arXiv Detail & Related papers (2023-08-07T18:50:57Z)
- QIGen: Generating Efficient Kernels for Quantized Inference on Large Language Models [22.055655390093722]
We present an automatic code generation approach for supporting quantized generative inference on LLMs such as LLaMA or OPT on off-the-shelf CPUs.
Results on CPU-based inference for LLaMA models show that our approach can lead to high performance and high accuracy, comparing favorably to the best existing open-source solution.
arXiv Detail & Related papers (2023-07-07T17:46:08Z)
- Inference with Reference: Lossless Acceleration of Large Language Models [97.04200102556551]
LLMA is an accelerator to speed up Large Language Model (LLM) inference with references.
It is motivated by the observation that there are abundant identical text spans between the decoding result by an LLM and the reference that is available in many real world scenarios.
arXiv Detail & Related papers (2023-04-10T09:55:14Z)
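Several of the papers above rely on the same draft-then-verify loop. Below is a minimal, framework-agnostic Python sketch of one greedy-verification round; it is not the implementation of any particular paper, and the model interfaces (`draft_next`, `target_greedy`) are assumed callables used only for exposition.

```python
from typing import Callable, List

def speculative_step(
    prefix: List[int],
    draft_next: Callable[[List[int]], int],           # cheap draft model: next greedy token
    target_greedy: Callable[[List[int]], List[int]],  # target model: greedy token after the prefix
                                                      # and after each drafted token (spec_len + 1 tokens)
    spec_len: int,
) -> List[int]:
    """One draft-then-verify round of greedy speculative decoding (sketch)."""
    # 1. Draft: run the cheap model autoregressively for spec_len steps.
    ctx = list(prefix)
    draft_tokens: List[int] = []
    for _ in range(spec_len):
        tok = draft_next(ctx)
        draft_tokens.append(tok)
        ctx.append(tok)

    # 2. Verify: a single target-model forward pass scores all drafted
    #    positions in parallel and returns spec_len + 1 greedy tokens.
    target_tokens = target_greedy(prefix + draft_tokens)

    # 3. Accept the longest prefix of drafts the target agrees with; on the
    #    first mismatch, keep the target's token instead and stop.
    accepted: List[int] = []
    for drafted, verified in zip(draft_tokens, target_tokens):
        if drafted == verified:
            accepted.append(drafted)
        else:
            accepted.append(verified)
            break
    else:
        # All drafts accepted: the verification pass also yields one bonus token.
        accepted.append(target_tokens[spec_len])
    return accepted
```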
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.