Cachemir: Fully Homomorphic Encrypted Inference of Generative Large Language Model with KV Cache
- URL: http://arxiv.org/abs/2602.11470v1
- Date: Thu, 12 Feb 2026 01:01:38 GMT
- Title: Cachemir: Fully Homomorphic Encrypted Inference of Generative Large Language Model with KV Cache
- Authors: Ye Yu, Yifan Zhou, Yi Chen, Pedro Soto, Wenjie Xiong, Meng Li,
- Abstract summary: Cachemir is a KV Cache Accelerated Homomorphic Encrypted LLM Inference Regime. We demonstrate that Cachemir achieves $48.83\times$ and $67.16\times$ speedup over MOAI (ICML'25) and THOR (CCS'25) respectively on CPU and consumes less than 100 seconds on GPU to generate an output token for Llama-3-8B.
- Score: 15.25568382221441
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative large language models (LLMs) have revolutionized multiple domains. Modern LLMs predominantly rely on an autoregressive decoding strategy, which generates output tokens sequentially and employs a key-value cache (KV cache) to avoid redundant computation. However, the widespread deployment of LLMs has raised serious privacy concerns, as users are feeding all types of data into the model, motivating the development of secure inference frameworks based on fully homomorphic encryption (FHE). A major limitation of existing FHE-based frameworks is their inability to effectively integrate the KV cache, resulting in prohibitively high latency for autoregressive decoding. In this paper, we propose Cachemir, a KV Cache Accelerated Homomorphic Encrypted LLM Inference Regime to overcome this limitation. Cachemir comprises three key technical contributions: 1) a set of novel HE packing algorithms specifically designed to leverage the computational advantages of the KV cache; 2) an interleaved replicated packing algorithm to efficiently compute the vector-matrix multiplications that result from using the KV cache in Transformer linear layers; and 3) an augmented bootstrapping placement strategy that accounts for the KV cache to minimize bootstrapping cost. We demonstrate that Cachemir achieves $48.83\times$ and $67.16\times$ speedup over MOAI (ICML'25) and THOR (CCS'25) respectively on CPU and consumes less than 100 seconds on GPU to generate an output token for Llama-3-8B.
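The abstract's starting point — that the KV cache turns each autoregressive decoding step into vector-matrix products over a growing cache of keys and values, instead of recomputing attention for the whole prefix — can be illustrated with a minimal plaintext (non-FHE) sketch. The single-head setup and all function names here are illustrative, not from the paper:

```python
import math

def attend(q, K, V):
    """Single-head attention of one query vector against cached keys/values."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
    m = max(scores)
    w = [math.exp(s - m) for s in scores]  # numerically stable softmax
    z = sum(w)
    w = [x / z for x in w]
    return [sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))]

def decode_step(x, K_cache, V_cache, Wq, Wk, Wv):
    """One autoregressive step: project only the NEW token, append its key and
    value to the cache, then attend over the whole cache. With the cache, each
    step costs one vector-matrix multiply per projection — the operation
    Cachemir's interleaved replicated packing targets under FHE — rather than
    recomputing keys/values for the entire prefix."""
    matvec = lambda W, v: [sum(W[i][j] * v[j] for j in range(len(v)))
                           for i in range(len(W))]
    q, k, v = matvec(Wq, x), matvec(Wk, x), matvec(Wv, x)
    K_cache.append(k)
    V_cache.append(v)
    return attend(q, K_cache, V_cache)
```

With identity projections and an empty cache, the first step simply returns the input token's value vector, and each subsequent call grows the cache by one entry.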
Related papers
- PackCache: A Training-Free Acceleration Method for Unified Autoregressive Video Generation via Compact KV-Cache [61.57938553036056]
We introduce PackCache, a training-free KV-cache management method that compacts the KV cache through three coordinated mechanisms. In terms of efficiency, PackCache accelerates end-to-end generation by 1.7-2.2x on 48-frame long sequences.
arXiv Detail & Related papers (2026-01-07T19:51:06Z)
- Joint Encoding of KV-Cache Blocks for Scalable LLM Serving [3.3230675313521716]
Existing KV-cache compression methods rely on rigid schemes, disrupt tensor layouts, or require specialized compute. We propose joint encoding of KV-cache blocks, which fuses similar blocks across requests and input chunks into shared representations. This alleviates the KV-cache memory bottleneck, supporting high-concurrency serving without specialized hardware.
arXiv Detail & Related papers (2026-01-06T14:50:58Z)
- CXL-SpecKV: A Disaggregated FPGA Speculative KV-Cache for Datacenter LLM Serving [5.216774377033164]
Large Language Models (LLMs) have revolutionized natural language processing tasks. LLMs face challenges due to the massive memory requirements of key-value (KV) caches. We propose CXL-SpecKV, a novel disaggregated KV-cache architecture.
arXiv Detail & Related papers (2025-12-11T15:40:36Z)
- Mask Tokens as Prophet: Fine-Grained Cache Eviction for Efficient dLLM Inference [27.2461395361407]
Diffusion large language models (dLLMs) present a promising alternative to dominant autoregressive models (ARMs). Existing cache eviction strategies are designed for ARMs and ignore the unique characteristics of dLLMs, thus leading to unsatisfactory performance. We introduce MaskKV, a training-free cache eviction framework tailored to dLLMs.
arXiv Detail & Related papers (2025-10-10T12:01:16Z)
- CommVQ: Commutative Vector Quantization for KV Cache Compression [50.37946553931796]
We propose Commutative Vector Quantization (CommVQ) to significantly reduce memory usage for long-context LLM inference. We first introduce additive quantization with a lightweight encoder and codebook to compress the KV cache. Our approach achieves high accuracy with additive quantization and low overhead via the RoPE-commutative codebook.
arXiv Detail & Related papers (2025-06-23T17:50:11Z)
- dKV-Cache: The Cache for Diffusion Language Models [53.85291644298835]
Diffusion Language Models (DLMs) have been seen as a promising competitor for autoregressive language models. We propose a KV-cache-like mechanism, delayed KV-Cache, for the denoising process of DLMs. Our approach is motivated by the observation that different tokens have distinct representation dynamics throughout the diffusion process.
arXiv Detail & Related papers (2025-05-21T17:32:10Z)
- QuantSpec: Self-Speculative Decoding with Hierarchical Quantized KV Cache [67.84112700032007]
Large Language Models (LLMs) are increasingly being deployed on edge devices for long-context settings. In these scenarios, the Key-Value (KV) cache is the primary bottleneck in terms of both GPU memory and latency. We propose a novel self-speculative decoding framework, QuantSpec, where the draft model shares the architecture of the target model but employs a hierarchical 4-bit quantized KV cache and 4-bit quantized weights for acceleration.
arXiv Detail & Related papers (2025-02-05T20:43:48Z)
- MPCache: MPC-Friendly KV Cache Eviction for Efficient Private LLM Inference [15.460864137509654]
We propose an accurate and MPC-friendly KV cache eviction framework, dubbed MPCache, for private large language model (LLM) inference. MPCache consistently outperforms prior-art KV cache eviction baselines across different generation tasks.
arXiv Detail & Related papers (2025-01-12T13:18:04Z)
- ThinK: Thinner Key Cache by Query-Driven Pruning [63.13363917871414]
Large Language Models (LLMs) have revolutionized the field of natural language processing, achieving unprecedented performance across a variety of applications. This paper focuses on the long-context scenario, addressing the inefficiencies in KV cache memory consumption during inference. We propose ThinK, a novel query-dependent KV cache pruning method designed to minimize attention weight loss while selectively pruning the least significant channels.
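As a rough illustration of query-dependent channel pruning (a toy scoring criterion, not ThinK's actual objective), one can rank key channels by combining aggregate query magnitude with each channel's energy in the cached keys, and keep only the top fraction:

```python
def prune_key_channels(K, queries, keep_ratio=0.5):
    """Toy query-driven channel pruning: score each key channel by the total
    query magnitude on that channel times the channel's L2 energy across the
    cached keys, then keep the highest-scoring channels. Returns the pruned
    cache and the indices of the kept channels."""
    d = len(K[0])
    q_mag = [sum(abs(q[c]) for q in queries) for c in range(d)]
    k_energy = [sum(k[c] * k[c] for k in K) ** 0.5 for c in range(d)]
    score = [q_mag[c] * k_energy[c] for c in range(d)]
    n_keep = max(1, int(d * keep_ratio))
    keep = sorted(sorted(range(d), key=lambda c: -score[c])[:n_keep])
    K_pruned = [[k[c] for c in keep] for k in K]
    return K_pruned, keep
```

Channels that carry little signal for the observed queries are dropped, shrinking the key cache while approximately preserving the attention scores.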
arXiv Detail & Related papers (2024-07-30T17:59:08Z)
- Model Tells You Where to Merge: Adaptive KV Cache Merging for LLMs on Long-Context Tasks [21.815661269986425]
We propose a novel KV cache merging approach, called KVMerger, to achieve adaptive KV cache compression for long-context tasks.
Our approach is inspired by the intriguing observation that key states exhibit high similarity at the token level within a single sequence.
We conduct extensive experiments to demonstrate the effectiveness of KVMerger for long-context tasks under constrained memory budgets.
arXiv Detail & Related papers (2024-07-11T12:50:42Z)
- KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache [67.9776980972508]
We develop a tuning-free 2-bit KV cache quantization algorithm named KIVI.
KIVI can enable Llama, Falcon, and Mistral models to maintain almost the same quality while using $\mathbf{2.6}\times$ less peak memory.
arXiv Detail & Related papers (2024-02-05T06:06:47Z)
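A minimal sketch of the asymmetric low-bit quantization KIVI applies to the KV cache: each group of values gets its own zero point and scale, and values are mapped onto the four 2-bit levels. The per-vector grouping and function names here are simplifications for illustration, not KIVI's actual per-channel/per-token scheme:

```python
def quant2bit(x):
    """Asymmetric 2-bit quantization of one group of values: store a zero
    point (the group minimum) and a scale, and map each value to an integer
    level in {0, 1, 2, 3}."""
    lo, hi = min(x), max(x)
    scale = (hi - lo) / 3 or 1.0  # 3 = number of intervals between 4 levels
    q = [round((v - lo) / scale) for v in x]
    return q, scale, lo

def dequant2bit(q, scale, lo):
    """Reconstruct approximate values from levels, scale, and zero point."""
    return [lo + scale * v for v in q]
```

Asymmetric (zero-point) quantization matters here because key and value activations are not centered at zero; a symmetric scheme would waste levels on an empty half of the range.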
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.