Not-a-Bandit: Provably No-Regret Drafter Selection in Speculative Decoding for LLMs
- URL: http://arxiv.org/abs/2510.20064v1
- Date: Wed, 22 Oct 2025 22:32:26 GMT
- Title: Not-a-Bandit: Provably No-Regret Drafter Selection in Speculative Decoding for LLMs
- Authors: Hongyi Liu, Jiaji Huang, Zhen Jia, Youngsuk Park, Yu-Xiang Wang
- Abstract summary: We show that we can accurately evaluate all draft models, instead of only the chosen one, without incurring additional queries to the target model. We design system-efficient versions of online learners and demonstrate that the overhead in computation and latency can be substantially reduced.
- Score: 26.796951335634542
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Speculative decoding is widely used in accelerating large language model (LLM) inference. In this work, we focus on the online draft model selection problem in speculative decoding. We design an algorithm that provably competes with the best draft model in hindsight for each query, in terms of either the token acceptance probability or the expected acceptance length. In particular, we show that we can accurately evaluate all draft models, instead of only the chosen one, without incurring additional queries to the target model, which allows us to improve exponentially over the existing bandit-based approach as the number of draft models increases. Our approach is generically applicable with any speculative decoding method (single-draft, multi-draft, and draft-tree). Moreover, we design system-efficient versions of the online learners and demonstrate that the overhead in computation and latency can be substantially reduced. We conduct extensive experiments on open-source LLMs and diverse datasets, demonstrating that our methods substantially outperform the state-of-the-art EAGLE3 and the BanditSpec baseline in a variety of domains where specialized domain-expert drafters are available, especially when long reasoning chains are required.
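To make the full-information idea concrete, here is a minimal sketch of an exponential-weights (Hedge) learner over draft models; the class name, the `eta` step size, and the `rewards` interface are illustrative assumptions rather than the paper's actual algorithm or API. The point it illustrates is the abstract's observation that one target-model verification pass can yield an acceptance estimate for every drafter, enabling a full-information update instead of a bandit one.

```python
import math
import random

# Hedge-style drafter selection (sketch). Assumes that after each query we can
# compute an acceptance-rate estimate for ALL drafters from the same
# target-model verification pass, as the abstract describes.
class HedgeDrafterSelector:
    def __init__(self, num_drafters: int, eta: float = 0.1):
        self.eta = eta  # learning rate (illustrative default)
        self.log_weights = [0.0] * num_drafters  # log space for stability

    def probabilities(self) -> list[float]:
        # Softmax over log-weights, shifted by the max for numerical safety.
        m = max(self.log_weights)
        exp_w = [math.exp(w - m) for w in self.log_weights]
        z = sum(exp_w)
        return [w / z for w in exp_w]

    def choose(self) -> int:
        # Pick the drafter for the next query in proportion to its weight.
        return random.choices(range(len(self.log_weights)),
                              weights=self.probabilities())[0]

    def update(self, rewards: list[float]) -> None:
        # Full-information step: rewards[i] estimates drafter i's token
        # acceptance probability on this query, for every drafter at once.
        for i, r in enumerate(rewards):
            self.log_weights[i] += self.eta * r
```

With full-information feedback, Hedge-type learners attain regret on the order of sqrt(T log K) over T queries and K drafters, versus the sqrt(TK)-type rates forced by bandit feedback; this gap is the exponential-in-K improvement the abstract refers to.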
Related papers
- Mamba Drafters for Speculative Decoding [58.080550222549064]
We introduce novel drafters based on Mamba, a state-of-the-art state space model (SSM). By leveraging the linear structure of SSMs, our approach avoids the quadratic complexity inherent in traditional Transformer-based methods. We further enhance efficiency with a novel test-time tree search algorithm for generating high-quality draft candidates.
arXiv Detail & Related papers (2025-06-01T22:52:47Z)
- TETRIS: Optimal Draft Token Selection for Batch Speculative Decoding [76.23719557942917]
TETRIS actively selects the most promising draft tokens (for every request in a batch) to be accepted when verified in parallel. We show theoretically and empirically that TETRIS outperforms baseline speculative decoding and existing methods that dynamically select draft tokens.
arXiv Detail & Related papers (2025-02-21T04:19:24Z)
- Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Experts (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce an adaptive task-aware pruning technique, UNCURL, to reduce the number of experts per MoE layer in an offline manner post-training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z)
- Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models [28.62382804829694]
Large language models (LLMs) remain prohibitively expensive to use under resource constraints. The high latency associated with auto-regressive generation renders large LLMs dependent on advanced computing infrastructure. Assisted decoding has helped alleviate this, but remains dependent on alignment between the two models.
arXiv Detail & Related papers (2024-08-16T01:12:21Z)
- Graph-Structured Speculative Decoding [52.94367724136063]
Speculative decoding has emerged as a promising technique to accelerate the inference of Large Language Models.
We introduce an innovative approach utilizing a directed acyclic graph (DAG) to manage the drafted hypotheses.
We observe a remarkable speedup of 1.73× to 1.96×, significantly surpassing standard speculative decoding.
arXiv Detail & Related papers (2024-07-23T06:21:24Z)
- S2D: Sorted Speculative Decoding For More Efficient Deployment of Nested Large Language Models [32.68002253527712]
We introduce a novel multi-target scenario for the deployment of draft models for faster inference.
We present a novel, more efficient sorted speculative decoding mechanism that outperforms regular baselines in multi-target settings.
arXiv Detail & Related papers (2024-07-02T05:14:15Z)
- Adaptive Draft-Verification for Efficient Large Language Model Decoding [24.347886232342862]
Large language model (LLM) decoding involves generating a sequence of tokens based on a given context.
The typical autoregressive decoding method requires a separate forward pass through the model for each token generated.
We introduce ADED, which accelerates LLM decoding without requiring fine-tuning.
arXiv Detail & Related papers (2024-06-27T22:20:39Z)
- Diversified Batch Selection for Training Acceleration [68.67164304377732]
A prevalent research line, known as online batch selection, explores selecting informative subsets during the training process.
Vanilla reference-model-free methods involve independently scoring and selecting data in a sample-wise manner.
We propose Diversified Batch Selection (DivBS), which is reference-model-free and can efficiently select diverse and representative samples.
arXiv Detail & Related papers (2024-06-07T12:12:20Z)
- Decoding Speculative Decoding [4.56754610152086]
Speculative decoding is a technique to speed up inference for Large Language Models without sacrificing quality. We study over 350 experiments with LLaMA-65B and OPT-66B using speculative decoding. Our newly designed draft model can provide 111% higher throughput than existing draft models.
arXiv Detail & Related papers (2024-02-02T16:15:24Z)
- Multi-Candidate Speculative Decoding [82.05519287513444]
Large language models have shown impressive capabilities across a variety of NLP tasks, yet their autoregressive text generation is time-consuming.
One way to speed them up is speculative decoding, which generates candidate segments from a fast draft model that are then verified in parallel by the target model.
This paper proposes sampling multiple candidates from a draft model and then organising them in batches for verification.
We design algorithms for efficient multi-candidate verification while maintaining the distribution of the target model.
arXiv Detail & Related papers (2024-01-12T17:15:23Z)
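For reference, the verification rule that these multi-candidate and batch methods build on is the standard speculative-sampling accept/reject step, which provably leaves the output distribution identical to sampling from the target model alone. Below is a minimal sketch for a single drafted token; the function name and the dense probability-vector inputs are illustrative assumptions.

```python
import numpy as np

def verify_token(token: int,
                 draft_probs: np.ndarray,
                 target_probs: np.ndarray,
                 rng: np.random.Generator) -> int:
    """Accept the drafted token or resample a correction from the target.

    Assumes `token` was sampled from `draft_probs`, so draft_probs[token] > 0.
    """
    p, q = target_probs[token], draft_probs[token]
    if rng.random() < min(1.0, p / q):
        return token  # accepted: output distribution still matches the target
    # Rejected: resample from the renormalized residual max(0, p - q), which
    # exactly restores the target distribution after the accept/reject step.
    residual = np.maximum(target_probs - draft_probs, 0.0)
    residual /= residual.sum()
    return int(rng.choice(len(residual), p=residual))
```

Multi-candidate schemes apply variants of this rule across several drafted continuations at once, keeping the same distribution-preserving guarantee while verifying them in a single batched target-model pass.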