AsyncSpade: Efficient Test-Time Scaling with Asynchronous Sparse Decoding
- URL: http://arxiv.org/abs/2510.07486v1
- Date: Wed, 08 Oct 2025 19:36:11 GMT
- Title: AsyncSpade: Efficient Test-Time Scaling with Asynchronous Sparse Decoding
- Authors: Shuqing Luo, Yilin Guan, Pingzhi Li, Hanrui Wang, Tianlong Chen
- Abstract summary: Test-time scaling (TTS) boosts LLM reasoning via long chain-of-thought (CoT), but the linear KV-cache growth amplifies the memory-bound bottleneck of LLM decoding. We propose AsyncSpade, an asynchronous framework for efficient TTS built on two core components.
- Score: 35.10915929939651
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Test-time scaling (TTS) boosts LLM reasoning via long chain-of-thought (CoT), but the linear KV-cache growth amplifies the memory-bound bottleneck of LLM decoding. Query-aware page-level sparse decoding can achieve state-of-the-art performance under constrained FLOPs budgets, but it is limited by both sequentially dependent page filtering and coarse-grained token selection, hampering serving efficiency and model performance on TTS tasks under high concurrency and long-CoT scenarios (where filtering can consume even more runtime than the forward pipeline itself). In this paper, we first find that the current-step query state can be accurately approximated in a unified manner from a short window of recent queries, enabling training-free query-aware sparsity without waiting in the decoding loop. We propose AsyncSpade, an asynchronous framework for efficient TTS built on two core components: (1) a novel lightweight temporal-regressive module that predicts the next-token query state; (2) an asynchronous and disaggregated framework that decouples KV-cache filtering from the auto-regressive decoding loop, overlapping token-level KV selection with the forward inference computation through asynchronism. To our knowledge, AsyncSpade is the first to eliminate this sequential dependence without sacrificing model performance. We validate the effectiveness of AsyncSpade on common LLM serving setups with an A100 node, where AsyncSpade fully overlaps KV-cache operations with the inference pipeline, achieving the theoretically optimal time-per-output-token (TPOT). Specifically, AsyncSpade delivers over a 20% reduction in TPOT compared to the SoTA baseline (i.e., Quest) and at least a 50% TPOT reduction compared to full attention on Qwen3-8B and Qwen3-32B models, while matching or surpassing their accuracy on various TTS benchmarks (AIME-24/25, GPQA-Diamond, MATH-500).
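The abstract's core idea, approximating the current-step query state from a short window of recent queries and using it to pre-select KV entries off the critical path, can be illustrated with a minimal sketch. The paper's module is a learned lightweight temporal-regressive predictor; here a hypothetical fixed geometric weighting stands in for it, and the function names (`approx_next_query`, `topk_token_indices`) are illustrative, not from the paper.

```python
import numpy as np

def approx_next_query(recent_queries, weights=None):
    """Approximate the next-token query state from a short window of
    recent query states (oldest first). A fixed geometric decay that
    favors the most recent query stands in for the paper's learned
    temporal-regressive module (an assumption of this sketch)."""
    n = len(recent_queries)
    w = np.asarray(weights if weights is not None
                   else [0.5 ** i for i in range(n)][::-1])
    w = w / w.sum()
    return (w[:, None] * np.asarray(recent_queries)).sum(axis=0)

def topk_token_indices(q_hat, keys, k):
    """Score cached keys against the predicted query and keep the top-k
    token positions (token-level sparse selection). Because q_hat does
    not depend on the current step's true query, this selection can run
    asynchronously, overlapped with the forward pass."""
    scores = keys @ q_hat          # dot-product relevance per cached token
    return np.argsort(scores)[-k:]  # indices of the k highest-scoring tokens
```

In the real system this selection would run on a separate stream/worker so the decoding loop never waits on it; the sketch only shows the scoring logic.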
Related papers
- Divide-and-Conquer CoT: RL for Reducing Latency via Parallel Reasoning [18.5812457692667]
We propose to train Divide-and-Conquer CoT (DC-CoT) to reduce the latency. DC-CoT can act as a director that identifies distinct subtasks that can be performed in parallel in its reasoning process, and then spawns workers to execute the subtasks. Our goal is to achieve high accuracy with a low longest path length, which is a theoretical measure of the latency needed for the response.
arXiv Detail & Related papers (2026-01-30T14:37:07Z) - AsyncHZP: Hierarchical ZeRO Parallelism with Asynchronous Scheduling for Scalable LLM Training [4.643969942380424]
We propose a novel asynchronous variant of ZeRO to achieve superior performance while maintaining simplicity and memory efficiency. Unlike traditional ZeRO, which employs over-fine-grained sharding that can lead to inefficient communication, AsyncHZP adaptively reshards parameters, gradients, and states across different replica groups. AsyncHZP consistently outperforms classic ND parallelism, achieving state-of-the-art performance without complex strategic tuning.
arXiv Detail & Related papers (2025-10-23T01:29:35Z) - dParallel: Learnable Parallel Decoding for dLLMs [77.24184219948337]
Diffusion large language models (dLLMs) offer parallel token prediction and lower inference latency. Existing open-source models still require nearly token-length decoding steps to ensure performance. We introduce dParallel, a simple and effective method that unlocks the inherent parallelism of dLLMs for fast sampling.
arXiv Detail & Related papers (2025-09-30T16:32:52Z) - Learning to Parallel: Accelerating Diffusion Large Language Models via Learnable Parallel Decoding [21.609237262034636]
Autoregressive decoding in large language models (LLMs) requires $\mathcal{O}(n)$ sequential steps for $n$ tokens. We propose Learning to Parallel Decode (Learn2PD), a framework that trains a lightweight and adaptive filter model to predict, for each token position, whether the current prediction matches the final output. This learned filter approximates an oracle parallel decoding strategy that unmasks tokens only when correctly predicted.
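The filter idea above, deciding per position whether a parallel prediction can be finalized, can be sketched in a few lines. Learn2PD trains a lightweight model for this decision; the sketch below substitutes a plain confidence cutoff (an assumption), and `parallel_unmask_step` is a hypothetical name, not the paper's API.

```python
import numpy as np

def parallel_unmask_step(probs, finalized, threshold=0.9):
    """One parallel-decoding step: finalize every still-masked position
    whose top-token confidence clears the filter's threshold. A fixed
    confidence cutoff stands in for Learn2PD's learned filter model
    (an assumption of this sketch).

    probs:     (seq_len, vocab) per-position token probabilities
    finalized: (seq_len,) bool mask of already-unmasked positions
    """
    top_conf = probs.max(axis=-1)                    # confidence per position
    newly = (~finalized) & (top_conf >= threshold)   # filter decision
    tokens = probs.argmax(axis=-1)                   # greedy token choice
    return tokens, finalized | newly
```

Iterating this step until all positions are finalized mimics the oracle strategy: confident positions are committed early, uncertain ones wait for later refinement steps.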
arXiv Detail & Related papers (2025-09-29T17:59:54Z) - ATTS: Asynchronous Test-Time Scaling via Conformal Prediction [112.54016379556073]
Large language models (LLMs) benefit from test-time scaling but are often hampered by high inference latency. We introduce ATTS (Asynchronous Test-Time Scaling), a statistically guaranteed adaptive scaling framework. We show that ATTS delivers up to a 56.7x speedup in test-time scaling and a 4.14x throughput improvement.
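ATTS's statistical guarantee rests on conformal prediction. A minimal sketch of the standard split-conformal construction is below; how ATTS wires this threshold into its asynchronous scaling loop is simplified away, and the function name is illustrative.

```python
import math

def conformal_threshold(cal_scores, alpha=0.1):
    """Split-conformal acceptance threshold: given n calibration
    nonconformity scores, accept a new candidate iff its score is at
    most the ceil((n+1)*(1-alpha))/n empirical quantile. This gives a
    marginal coverage guarantee of at least 1 - alpha under
    exchangeability (the standard construction)."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(cal_scores)[min(k, n) - 1]
```

At test time, candidates whose nonconformity score exceeds this threshold are rejected (e.g., escalated to a stronger model), which is how a conformal layer can gate adaptive scaling decisions with a statistical guarantee.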
arXiv Detail & Related papers (2025-09-18T16:55:09Z) - Faster and Better LLMs via Latency-Aware Test-Time Scaling [47.3923926808606]
Test-Time Scaling (TTS) has proven effective in improving the performance of Large Language Models (LLMs) during inference. Existing research has overlooked the efficiency of TTS from a latency-sensitive perspective. We demonstrate that a compute-optimal TTS does not always result in the lowest latency in scenarios where latency is critical.
arXiv Detail & Related papers (2025-05-26T07:51:30Z) - Optimizing LLM Inference: Fluid-Guided Online Scheduling with Memory Constraints [14.341123057506827]
Large Language Models (LLMs) are indispensable in today's applications, but their inference procedure demands significant computational resources. This paper formulates LLM inference optimization as a multi-stage online scheduling problem. We develop a fluid dynamics approximation to provide a tractable benchmark that guides algorithm design.
arXiv Detail & Related papers (2025-04-15T16:00:21Z) - COrAL: Order-Agnostic Language Modeling for Efficient Iterative Refinement [80.18490952057125]
Iterative refinement has emerged as an effective paradigm for enhancing the capabilities of large language models (LLMs) on complex tasks.
We propose Context-Wise Order-Agnostic Language Modeling (COrAL) to overcome these challenges.
Our approach models multiple token dependencies within manageable context windows, enabling the model to perform iterative refinement internally.
arXiv Detail & Related papers (2024-10-12T23:56:19Z) - PecSched: Preemptive and Efficient Cluster Scheduling for LLM Inference [11.194752361478567]
Existing cluster-level LLM scheduling strategies primarily target short-input requests with lengths below 2K. We propose PecSched, a preemptive and efficient cluster-level LLM inference scheduler. We show that PecSched reduces the 99th-percentile queueing delay of short-input requests by up to 92% and improves their throughput by up to 595%.
arXiv Detail & Related papers (2024-09-23T15:16:29Z) - Fast Chain-of-Thought: A Glance of Future from Parallel Decoding Leads to Answers Faster [61.83949316226113]
FastCoT is a model-agnostic framework based on parallel decoding.
We show that FastCoT saves inference time by nearly 20% with only a negligible performance drop compared to the regular approach.
arXiv Detail & Related papers (2023-11-14T15:56:18Z) - Decoder Tuning: Efficient Language Understanding as Decoding [84.68266271483022]
We present Decoder Tuning (DecT), which instead optimizes task-specific decoder networks on the output side.
By gradient-based optimization, DecT can be trained within several seconds and requires only one PLM query per sample.
We conduct extensive natural language understanding experiments and show that DecT significantly outperforms state-of-the-art algorithms with a $200\times$ speed-up.
arXiv Detail & Related papers (2022-12-16T11:15:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.