Efficient Remote Prefix Fetching with GPU-native Media ASICs
- URL: http://arxiv.org/abs/2602.09725v2
- Date: Thu, 12 Feb 2026 03:30:35 GMT
- Title: Efficient Remote Prefix Fetching with GPU-native Media ASICs
- Authors: Liang Mi, Weijun Wang, Jinghan Chen, Ting Cao, Haipeng Dai, Yunxin Liu,
- Abstract summary: Remote KV cache reuse fetches the KV cache for identical contexts from remote storage, avoiding recomputation and accelerating LLM inference. Recent studies address bandwidth-limited scenarios by transmitting KV caches in compressed form, but the associated heavyweight decompression counteracts the KV reuse benefits. We propose an efficient and widely deployable remote KV cache reuse solution that leverages GPU-native video codecs.
- Score: 15.991394335072547
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Remote KV cache reuse fetches the KV cache for identical contexts from remote storage, avoiding recomputation and accelerating LLM inference. While it excels in high-speed networks, its performance degrades significantly in bandwidth-limited scenarios. Recent studies address this by transmitting KV caches in compressed form, but the associated heavyweight decompression counteracts the benefits of KV reuse. In this paper, we propose an efficient and widely deployable remote KV cache reuse solution that leverages GPU-native video codecs. Our system, KVFetcher, enables effective KV cache coding with two techniques. A codec-friendly tensor layout compresses the KV cache into a highly compact video format, enabling fast transmission. An efficient KV fetcher orchestrates the transmission, decoding, and restoration of compressed KV caches in a pipelined manner, eliminating resource contention, masking network fluctuations, and minimizing time-to-first-token (TTFT). We prototype KVFetcher on diverse GPUs, from high- to low-end. Experiments reveal that it reduces TTFT by up to 3.51x compared to SOTA methods while maintaining lossless accuracy.
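To make the two techniques concrete, the following is a minimal, hypothetical sketch of a "codec-friendly tensor layout": per-layer KV tensors are quantized to 8 bits and tiled into fixed-size grayscale frames that a GPU-native video encoder (e.g., NVENC) could then compress as a short clip. The shapes, the per-tensor affine 8-bit quantization, and the 256x256 frame size are illustrative assumptions, not the layout actually used by KVFetcher.

```python
# Hypothetical "codec-friendly tensor layout" sketch: quantize a KV tensor to
# uint8 and tile it into fixed-size grayscale frames that a GPU video encoder
# could compress. The quantization scheme and frame size are assumptions.
import math
import torch

def kv_to_frames(kv: torch.Tensor, frame_hw: int = 256):
    """kv: [num_tokens, num_heads, head_dim] float16 -> (frames, scale, offset)."""
    kvf = kv.float()
    lo, hi = kvf.min(), kvf.max()
    scale = (hi - lo).clamp(min=1e-8) / 255.0           # per-tensor affine quantization
    q = ((kvf - lo) / scale).round().clamp(0, 255).to(torch.uint8)
    flat = q.flatten()
    pad = (-flat.numel()) % (frame_hw * frame_hw)       # pad so bytes tile exactly
    flat = torch.cat([flat, flat.new_zeros(pad)])
    frames = flat.view(-1, frame_hw, frame_hw)          # [num_frames, H, W] "video"
    return frames, scale, lo

def frames_to_kv(frames: torch.Tensor, scale, lo, shape):
    """Inverse of kv_to_frames (exact up to the 8-bit quantization error)."""
    flat = frames.flatten()[: math.prod(shape)].float()
    return (flat * scale + lo).to(torch.float16).view(shape)

# Usage: 1024 tokens, 8 heads, 128-dim keys -> a stack of 256x256 frames.
kv = torch.randn(1024, 8, 128, dtype=torch.float16)
frames, scale, lo = kv_to_frames(kv)
restored = frames_to_kv(frames, scale, lo, kv.shape)
print(frames.shape, (restored - kv).abs().max())
```

The second technique, the efficient KV fetcher, overlaps transmission, decoding, and restoration so that no stage idles while another works. The sketch below shows that idea as a bounded-queue pipeline over KV chunks; the chunk granularity, stage functions, and threading model are again assumptions for illustration, not the paper's implementation.

```python
# Hypothetical three-stage pipeline: network fetch, codec decode, and tensor
# restoration of chunk i+1 overlap with the restoration of chunk i via bounded
# queues. fetch/decode/restore are caller-supplied stage functions (assumed).
import queue
import threading

def run_pipeline(chunk_ids, fetch, decode, restore):
    q_fetched, q_decoded = queue.Queue(maxsize=2), queue.Queue(maxsize=2)

    def fetch_stage():
        for cid in chunk_ids:
            q_fetched.put(fetch(cid))        # e.g., HTTP/RDMA read of encoded frames
        q_fetched.put(None)                  # sentinel: no more chunks

    def decode_stage():
        while (item := q_fetched.get()) is not None:
            q_decoded.put(decode(item))      # e.g., GPU video decoder
        q_decoded.put(None)

    threading.Thread(target=fetch_stage, daemon=True).start()
    threading.Thread(target=decode_stage, daemon=True).start()

    restored = []
    while (item := q_decoded.get()) is not None:
        restored.append(restore(item))       # dequantize / reshape back to KV layout
    return restored
```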
Related papers
- DeltaKV: Residual-Based KV Cache Compression via Long-Range Similarity [50.52392445266824]
We propose a residual-based KV cache compression framework motivated by long-range inter-token similarity and highly shared latent components in KV representations. Instead of discarding tokens, DeltaKV encodes semantic residuals relative to retrieved historical references, preserving fidelity while substantially reducing storage. Experiments show that DeltaKV reduces KV cache memory to 29% of the original while maintaining near-lossless accuracy on LongBench, SCBench, and AIME. (A minimal illustrative sketch of this residual idea appears after this list.)
arXiv Detail & Related papers (2026-02-08T15:14:36Z)
- PackKV: Reducing KV Cache Memory Footprint through LLM-Aware Lossy Compression [8.427136461713706]
We present PackKV, a generic and efficient KV cache management framework. PackKV supports both latency-critical and throughput-critical inference scenarios.
arXiv Detail & Related papers (2025-12-30T20:05:32Z)
- StreamKV: Streaming Video Question-Answering with Segment-based KV Cache Retrieval and Compression [95.59657871147846]
We propose StreamKV, a framework that seamlessly equips Video-LLMs with advanced KV cache retrieval and compression. Experiments on public StreamingVQA benchmarks demonstrate that StreamKV significantly outperforms existing Online Video-LLMs.
arXiv Detail & Related papers (2025-11-10T16:25:03Z)
- KV Cache Transform Coding for Compact Storage in LLM Inference [2.20003167536462]
We present KVTC, a lightweight transform coder that compresses KV caches for compact on-GPU and off-GPU storage. By exploiting redundancies in KV caches, KVTC achieves up to 20x compression while maintaining reasoning and long-context accuracy. We test KVTC with Llama 3, Mistral NeMo, and R1-Qwen 2.5 models across benchmarks including AIME25, LiveCodeBench, GSM8K, MMLU, Qasper, RULER, and MATH-500.
arXiv Detail & Related papers (2025-11-03T18:20:35Z)
- KVComp: A High-Performance, LLM-Aware, Lossy Compression Framework for KV Cache [7.019967158501771]
We present KVComp, a generic and efficient KV cache management framework optimized for long-text generation. KVComp employs novel lossy compression techniques specifically designed for KV cache data characteristics. We show that KVComp achieves on average 47% and up to 83% higher memory reduction rate compared to existing methods.
arXiv Detail & Related papers (2025-08-30T18:25:19Z)
- ReCalKV: Low-Rank KV Cache Compression via Head Reordering and Offline Calibration [69.57122277845293]
We propose ReCalKV, a post-training low-rank KV cache compression approach with tailored strategies for Keys and Values. For Keys, we propose Head-wise Similarity-aware Reordering (HSR), which clusters structurally similar heads into groups, enabling more accurate low-rank approximation. For Values, we propose Offline Value Calibration (OVC), which efficiently calibrates the value projection matrix using calibration data without training.
arXiv Detail & Related papers (2025-05-30T08:49:27Z)
- R-KV: Redundancy-aware KV Cache Compression for Reasoning Models [77.84539432982307]
We propose Redundancy-aware KV Cache Compression for Reasoning models (R-KV). R-KV preserves nearly 100% of the full KV cache performance using only 10% of the KV cache. Remarkably, R-KV even achieves 105% of full KV cache performance with 16% of the KV cache.
arXiv Detail & Related papers (2025-05-30T02:03:24Z)
- FreeKV: Boosting KV Cache Retrieval for Efficient LLM Inference [12.79375490077812]
FreeKV is an algorithm-system co-optimization framework to enhance KV retrieval efficiency while preserving accuracy. Experiments demonstrate that FreeKV achieves near-lossless accuracy across various scenarios and models, delivering up to 13x speedup.
arXiv Detail & Related papers (2025-05-19T13:36:45Z)
- QuantSpec: Self-Speculative Decoding with Hierarchical Quantized KV Cache [67.84112700032007]
Large Language Models (LLMs) are increasingly being deployed on edge devices for long-context settings. In these scenarios, the Key-Value (KV) cache is the primary bottleneck in terms of both GPU memory and latency. We propose a novel self-speculative decoding framework, QuantSpec, where the draft model shares the architecture of the target model but employs a hierarchical 4-bit quantized KV cache and 4-bit quantized weights for acceleration.
arXiv Detail & Related papers (2025-02-05T20:43:48Z)
- FastKV: KV Cache Compression for Fast Long-Context Processing with Token-Selective Propagation [14.33163594016033]
Large language models (LLMs) require substantial prefill computation and key-value (KV) cache memory. Recent works that compress KV caches with prefill acceleration reduce this cost but inadvertently tie the prefill compute reduction to the decoding KV budget. FastKV is a KV cache compression framework designed to reduce latency in both prefill and decoding by leveraging the stabilization of token importance in later layers.
arXiv Detail & Related papers (2025-02-03T05:25:09Z)
- ChunkKV: Semantic-Preserving KV Cache Compression for Efficient Long-Context LLM Inference [61.412894960600205]
Large Language Models (LLMs) require significant GPU memory when processing long texts. ChunkKV reimagines KV cache compression by treating semantic chunks as basic compression units. ChunkKV outperforms state-of-the-art methods by up to 8.7% in precision.
arXiv Detail & Related papers (2025-02-01T03:49:47Z)
- KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing [58.29726147780976]
We propose a plug-and-play method called KVSharer, which shares the KV cache between layers to achieve layer-wise compression. Experiments show that KVSharer can reduce KV cache computation by 30%, thereby lowering memory consumption. We verify that KVSharer is compatible with existing intra-layer KV cache compression methods, and combining both can further save memory.
arXiv Detail & Related papers (2024-10-24T08:06:41Z)
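To illustrate the residual idea behind DeltaKV (the first related paper above), here is a minimal sketch: each token's KV vector is stored as a reference index plus a residual against its most similar predecessor. The cosine-similarity retrieval, per-token granularity, and uncompressed float residuals are simplifying assumptions, not DeltaKV's actual design.

```python
# Illustrative residual-based KV compression in the spirit of DeltaKV: store each
# token's KV vector as (reference index, residual) against its most similar
# earlier token. Retrieval rule and residual storage format are assumptions.
import torch

def encode_residual(kv: torch.Tensor):
    """kv: [num_tokens, dim] -> list of (ref_index or None, residual)."""
    encoded = [(None, kv[0].clone())]
    for t in range(1, kv.shape[0]):
        # Retrieve the most similar earlier token as the reference.
        sims = torch.nn.functional.cosine_similarity(kv[:t], kv[t].unsqueeze(0), dim=-1)
        ref = int(sims.argmax())
        encoded.append((ref, kv[t] - kv[ref]))   # residuals are cheaper to compress
    return encoded

def decode_residual(encoded, dim, dtype=torch.float32):
    out = torch.empty(len(encoded), dim, dtype=dtype)
    for t, (ref, res) in enumerate(encoded):
        out[t] = res if ref is None else out[ref] + res
    return out

kv = torch.randn(64, 128)
restored = decode_residual(encode_residual(kv), dim=128)
print(torch.allclose(kv, restored, atol=1e-5))   # near-exact round trip
```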