Context-Aware CodeLLM Eviction for AI-assisted Coding
- URL: http://arxiv.org/abs/2506.18796v1
- Date: Mon, 23 Jun 2025 16:03:32 GMT
- Title: Context-Aware CodeLLM Eviction for AI-assisted Coding
- Authors: Kishanthan Thangarajah, Boyuan Chen, Shi Chang, Ahmed E. Hassan,
- Abstract summary: AI-assisted coding tools powered by Code Large Language Models (CodeLLMs) are increasingly integrated into modern software development. To address concerns around privacy, latency, and model customization, many enterprises opt to self-host these models. This paper presents CACE, a novel context-aware model eviction strategy designed specifically to optimize self-hosted CodeLLM serving under resource constraints.
- Score: 6.199193051670653
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI-assisted coding tools powered by Code Large Language Models (CodeLLMs) are increasingly integrated into modern software development workflows. To address concerns around privacy, latency, and model customization, many enterprises opt to self-host these models. However, the diversity and growing number of CodeLLMs, coupled with limited accelerator memory, introduce practical challenges in model management and serving efficiency. This paper presents CACE, a novel context-aware model eviction strategy designed specifically to optimize self-hosted CodeLLM serving under resource constraints. Unlike traditional eviction strategies based solely on recency (e.g., Least Recently Used), CACE leverages multiple context-aware factors, including model load time, task-specific latency sensitivity, expected output length, and recent usage and future demand tracked through a sliding window. We evaluate CACE using realistic workloads that include both latency-sensitive code completion and throughput-intensive code reasoning tasks. Our experiments show that CACE reduces Time-to-First-Token (TTFT) and end-to-end (E2E) latency, while significantly lowering the number of model evictions compared to state-of-the-art systems. Ablation studies further demonstrate the importance of multi-factor eviction in balancing responsiveness and resource efficiency. This work contributes practical strategies for deploying scalable, low-latency AI coding assistants in real-world software engineering environments.
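The abstract lists the factors CACE weighs but not how they are combined. As a minimal illustrative sketch in Python, assuming a weighted linear retention score (the weights, the 300-second window, and every name below are our own choices, not the paper's), a multi-factor eviction policy could look like this:

```python
# Illustrative sketch only: one way to combine the factors the abstract
# names (load time, latency sensitivity, expected output length, and
# sliding-window demand) into a retention score. The linear form and the
# weights are assumptions, not CACE's actual formulation.
import time
from collections import deque
from dataclasses import dataclass, field


@dataclass
class ModelStats:
    load_time_s: float        # measured time to reload the model's weights
    latency_sensitive: bool   # e.g., code completion vs. code reasoning
    avg_output_tokens: float  # expected output length for this model's tasks
    window_s: float = 300.0   # sliding-window length for demand tracking
    requests: deque = field(default_factory=deque)  # request timestamps

    def record_request(self) -> None:
        self.requests.append(time.monotonic())

    def demand(self) -> float:
        """Requests per second over the window, a proxy for future demand."""
        now = time.monotonic()
        while self.requests and now - self.requests[0] > self.window_s:
            self.requests.popleft()
        return len(self.requests) / self.window_s


def retention_score(m: ModelStats, w_load: float = 1.0, w_lat: float = 2.0,
                    w_out: float = 0.01, w_demand: float = 100.0) -> float:
    """Higher score = costlier to evict; the victim is the minimum-score model."""
    return (w_load * m.load_time_s
            + w_lat * float(m.latency_sensitive)
            + w_out * m.avg_output_tokens
            + w_demand * m.demand())


def pick_victim(loaded: dict) -> str:
    """Name of the loaded model to evict when accelerator memory is full."""
    return min(loaded, key=lambda name: retention_score(loaded[name]))
```

Under this kind of scoring, a rarely used model that reloads quickly and serves throughput-oriented tasks is evicted before a hot, latency-sensitive completion model, which matches the behavior the abstract describes.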
Related papers
- ORMind: A Cognitive-Inspired End-to-End Reasoning Framework for Operations Research [53.736407871322314]
We introduce ORMind, a cognitive-inspired framework that enhances optimization through counterfactual reasoning. Our approach emulates human cognition, implementing an end-to-end workflow that transforms requirements into mathematical models and executable code. It is currently being tested internally in Lenovo's AI Assistant, with plans to enhance optimization capabilities for both business and consumer customers.
arXiv Detail & Related papers (2025-06-02T05:11:21Z)
- Co-Saving: Resource Aware Multi-Agent Collaboration for Software Development [65.94639060883475]
We propose a resource-aware multi-agent system, Co-Saving. Our key innovation is the introduction of "shortcuts". Compared to the state-of-the-art MAS ChatDev, our method achieves an average reduction of 50.85% in token usage.
arXiv Detail & Related papers (2025-05-28T02:23:53Z)
- Deploying Large AI Models on Resource-Limited Devices with Split Federated Learning [39.73152182572741]
This paper proposes a novel framework, named Quantized Split Federated Fine-Tuning Large AI Model (SFLAM). By partitioning the training load between edge devices and servers, SFLAM can facilitate the operation of large models on devices. SFLAM incorporates quantization management, power control, and bandwidth allocation strategies to enhance training efficiency.
arXiv Detail & Related papers (2025-04-12T07:55:11Z)
- Thinking Longer, Not Larger: Enhancing Software Engineering Agents via Scaling Test-Time Compute [61.00662702026523]
We propose a unified Test-Time Compute scaling framework that leverages increased inference-time computation instead of larger models. Our framework incorporates two complementary strategies: internal TTC and external TTC. We demonstrate that our 32B model achieves a 46% issue resolution rate, surpassing significantly larger models such as DeepSeek R1 671B and OpenAI o1.
arXiv Detail & Related papers (2025-03-31T07:31:32Z)
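The internal/external split is not detailed in this entry; as a hedged sketch of the external strategy in general (sample several candidates at inference time and keep the one a verifier prefers), a generic best-of-n loop follows. `generate` and `verify` are hypothetical placeholders, not this paper's interfaces:

```python
# Hedged sketch of external test-time compute scaling: draw n candidate
# solutions from the same model and keep the one a verifier scores
# highest. `generate` and `verify` are assumed stand-ins.
from typing import Callable


def best_of_n(prompt: str,
              generate: Callable[[str], str],
              verify: Callable[[str, str], float],
              n: int = 8) -> str:
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: verify(prompt, c))
```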
- SLA-Awareness for AI-assisted coding [6.199193051670653]
We propose Coding Assistant Task Orchestrator (CATO) to serve a diverse assortment of coding tasks while meeting latency requirements and maximizing resource utilization. Our experiments demonstrate that when all types of coding tasks were served simultaneously, for TTFT-critical tasks, CATO improves overall Goodput rate and resource utilization by up to 10% and 41.1%, respectively.
arXiv Detail & Related papers (2025-03-25T17:38:28Z)
- DWIM: Towards Tool-aware Visual Reasoning via Discrepancy-aware Workflow Generation & Instruct-Masking Tuning [57.285435980459205]
Compositional visual reasoning approaches have shown promise as more effective strategies than end-to-end VR methods. We propose DWIM: discrepancy-aware training workflow generation, which assesses tool usage and extracts more viable workflows for training, and instruct-masking fine-tuning, which guides the model to clone only effective actions, enabling the generation of more practical solutions.
arXiv Detail & Related papers (2025-03-25T01:57:59Z)
- DuoDecoding: Hardware-aware Heterogeneous Speculative Decoding with Dynamic Multi-Sequence Drafting [59.57151419673759]
Speculative decoding presents a draft-then-verify framework that reduces generation latency while maintaining output distribution fidelity. We propose DuoDecoding, a novel approach that strategically deploys the draft and target models on the CPU and GPU respectively. Our method incorporates a hardware-aware optimal draft budget to minimize idle times and employs dynamic multi-sequence drafting to enhance draft quality.
arXiv Detail & Related papers (2025-03-02T08:27:48Z)
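Draft-then-verify decoding itself is standard and can be sketched; the greedy-matching simplification below is ours and omits DuoDecoding's CPU/GPU placement, batched verification, and probabilistic acceptance:

```python
# Minimal greedy sketch of draft-then-verify speculative decoding.
# Production systems verify all k draft tokens in one batched target
# forward pass and accept probabilistically; both are simplified here.
from typing import Callable, List


def speculative_decode(prefix: List[int],
                       draft_next: Callable[[List[int]], int],
                       target_next: Callable[[List[int]], int],
                       k: int = 4,
                       max_new: int = 64) -> List[int]:
    out = list(prefix)
    while len(out) - len(prefix) < max_new:
        # 1. The cheap draft model proposes k tokens autoregressively.
        ctx, draft = list(out), []
        for _ in range(k):
            tok = draft_next(ctx)
            draft.append(tok)
            ctx.append(tok)
        # 2. The target model checks each proposal in order: keep the
        #    agreeing prefix, then emit the target's own token at the
        #    first disagreement and discard the rest of the draft.
        for tok in draft:
            want = target_next(out)
            out.append(want)
            if want != tok:
                break
    return out
```

Because every appended token is the target model's own choice at that context, the output matches plain greedy decoding; the speedup comes from verifying draft tokens in parallel rather than generating them one by one.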
- DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs [70.91804882618243]
This paper proposes DSMoE, a novel approach that achieves sparsification by partitioning pre-trained FFN layers into computational blocks. We implement adaptive expert routing using sigmoid activation and straight-through estimators, enabling tokens to flexibly access different aspects of model knowledge. Experiments on LLaMA models demonstrate that under equivalent computational constraints, DSMoE achieves superior performance compared to existing pruning and MoE approaches.
arXiv Detail & Related papers (2025-02-18T02:37:26Z)
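Sigmoid gating with a straight-through estimator is a well-known construction; the minimal PyTorch sketch below illustrates the idea under our own assumptions (the 0.5 threshold, class and parameter names), not DSMoE's actual design:

```python
# Hedged sketch of sigmoid expert routing with a straight-through
# estimator: the forward pass uses a hard 0/1 expert mask, while
# gradients flow through the soft sigmoid gate.
import torch
import torch.nn as nn


class SigmoidSTGate(nn.Module):
    def __init__(self, d_model: int, n_experts: int):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        probs = torch.sigmoid(self.gate(x))   # soft gate values in (0, 1)
        hard = (probs > 0.5).float()          # hard expert selection mask
        # Straight-through trick: forward value of `hard`, gradient of
        # `probs` (the detached term cancels the value, not the grad).
        return hard + probs - probs.detach()
```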
- Accelerating AIGC Services with Latent Action Diffusion Scheduling in Edge Networks [27.961536719427205]
Current AIGC models primarily focus on content quality within a centralized framework, resulting in a high service delay and negative user experiences. We propose LAD-TS, a novel Latent Action Diffusion-based Task Scheduling method that orchestrates multiple edge servers for expedited AIGC services. We also develop DEdgeAI, a prototype edge system with a refined AIGC model deployment to implement and evaluate our LAD-TS method.
arXiv Detail & Related papers (2024-12-24T06:40:13Z)
- GUIDE: A Global Unified Inference Engine for Deploying Large Language Models in Heterogeneous Environments [1.0558515062670693]
Deploying large language models (LLMs) in real-world scenarios remains a critical challenge. These challenges often lead to inefficiencies in memory utilization, latency, and throughput. We develop a framework to address these issues, achieving prediction errors between 9.9% and 42.3% for key metrics such as batch latency, TTFT, and decode throughput.
arXiv Detail & Related papers (2024-12-06T05:46:43Z)
- A Graph Neural Networks based Framework for Topology-Aware Proactive SLA Management in a Latency Critical NFV Application Use-case [0.34376560669160383]
Recent advancements in 5G and 6G have led to the emergence of latency-critical applications delivered via a Network Function Virtualization (NFV) enabled paradigm.
We propose a proactive SLA management framework leveraging Graph Neural Networks (GNN) and Deep Reinforcement Learning (DRL) to balance the trade-off between efficiency and reliability.
arXiv Detail & Related papers (2022-11-10T23:22:05Z)