Cost-Aware Contrastive Routing for LLMs
- URL: http://arxiv.org/abs/2508.12491v1
- Date: Sun, 17 Aug 2025 20:16:44 GMT
- Title: Cost-Aware Contrastive Routing for LLMs
- Authors: Reza Shirkavand, Shangqian Gao, Peiran Yu, Heng Huang
- Abstract summary: We introduce Cost-Spectrum Contrastive Routing (CSCR), a lightweight framework that maps both prompts and models into a shared embedding space. CSCR consistently outperforms baselines, improving the accuracy-cost tradeoff by up to 25%.
- Score: 56.94921736486255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study cost-aware routing for large language models across diverse and dynamic pools of models. Existing approaches often overlook prompt-specific context, rely on expensive model profiling, assume a fixed set of experts, or use inefficient trial-and-error strategies. We introduce Cost-Spectrum Contrastive Routing (CSCR), a lightweight framework that maps both prompts and models into a shared embedding space to enable fast, cost-sensitive selection. CSCR uses compact, fast-to-compute logit footprints for open-source models and perplexity fingerprints for black-box APIs. A contrastive encoder is trained to favor the cheapest accurate expert within adaptive cost bands. At inference time, routing reduces to a single k-NN lookup via a FAISS index, requiring no retraining when the expert pool changes and enabling microsecond latency. Across multiple benchmarks, CSCR consistently outperforms baselines, improving the accuracy-cost tradeoff by up to 25%, while generalizing robustly to unseen LLMs and out-of-distribution prompts.
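At inference time, the routing step above reduces to a nearest-neighbor query followed by a cheapest-expert pick. A minimal illustrative sketch of that lookup follows, assuming a trained prompt encoder and precomputed expert footprint embeddings; the expert names, costs, and random vectors are placeholders rather than the paper's actual components.

```python
# Minimal sketch of CSCR-style inference: a single k-NN lookup over expert
# embeddings, then picking the cheapest retrieved expert. The embeddings here
# are random stand-ins for the trained prompt encoder and expert footprints.
import numpy as np
import faiss

dim = 64
rng = np.random.default_rng(0)

# Hypothetical expert pool: (name, cost per 1K tokens, footprint embedding).
experts = [
    ("small-7b",  0.02, rng.standard_normal(dim)),
    ("mid-13b",   0.10, rng.standard_normal(dim)),
    ("large-70b", 0.60, rng.standard_normal(dim)),
]
expert_vecs = np.stack([v for _, _, v in experts]).astype("float32")
faiss.normalize_L2(expert_vecs)

index = faiss.IndexFlatIP(dim)   # cosine similarity after L2 normalization
index.add(expert_vecs)

def route(prompt_embedding: np.ndarray, k: int = 2) -> str:
    """Return the cheapest expert among the k nearest in embedding space."""
    q = prompt_embedding.astype("float32").reshape(1, -1)
    faiss.normalize_L2(q)
    _, idx = index.search(q, k)
    candidates = [experts[i] for i in idx[0]]
    return min(candidates, key=lambda e: e[1])[0]   # cheapest retrieved expert

print(route(rng.standard_normal(dim)))
```

Because the index holds one vector per expert, adding or removing a model only requires updating the index, consistent with the claim that the pool can change without retraining.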
Related papers
- MMR-Bench: A Comprehensive Benchmark for Multimodal LLM Routing [41.77627136743721]
In practical deployments, workloads span lightweight OCR to complex multimodal reasoning. Routing is nontrivial due to modality fusion, wide variation in computational cost across models, and the absence of a standardized, budget-aware evaluation. We present MMR-Bench, a unified benchmark that isolates the multimodal routing problem and enables comparison under fixed candidate sets and cost models.
arXiv Detail & Related papers (2026-01-25T12:44:14Z)
- Don't Start Over: A Cost-Effective Framework for Migrating Personalized Prompts Between LLMs [51.79252689855809]
Personalization in Large Language Models (LLMs) often relies on user-specific soft prompts. We propose the Prompt-level User Migration Adapter (PUMA), a framework to efficiently migrate personalized prompts across incompatible models. Experiments on three large-scale datasets show our method matches or even surpasses the performance of retraining from scratch, reducing computational cost by up to 98%.
arXiv Detail & Related papers (2026-01-17T12:30:31Z)
- Confidence-Guided Stepwise Model Routing for Cost-Efficient Reasoning [20.41220110321494]
We propose STEER (Confidence-Guided Stepwise Model Routing for Cost-Efficient Reasoning), a domain-agnostic framework that performs fine-grained, step-level routing between smaller and larger language models. Our results establish model-internal confidence as a robust, domain-agnostic signal for model routing.
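As a rough illustration of step-level routing on model-internal confidence, the sketch below drafts each reasoning step with a small model and escalates only low-confidence steps to the larger one; the callables, threshold, and stop marker are hypothetical stand-ins, not STEER's actual design.

```python
# Hedged sketch of confidence-guided stepwise routing: the small model drafts
# each step, and only steps below a confidence threshold are redone by the
# large model. Both `small_step` and `large_step` are hypothetical model calls.
from typing import Callable, Tuple

def solve_stepwise(
    problem: str,
    small_step: Callable[[str], Tuple[str, float]],   # returns (step_text, confidence)
    large_step: Callable[[str], str],
    confidence_threshold: float = 0.7,
    max_steps: int = 8,
) -> str:
    context = problem
    for _ in range(max_steps):
        step, confidence = small_step(context)
        if confidence < confidence_threshold:
            step = large_step(context)          # escalate only this step
        context += "\n" + step
        if step.strip().endswith("[DONE]"):     # illustrative stop marker
            break
    return context
```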
arXiv Detail & Related papers (2025-11-09T02:33:08Z)
- xRouter: Training Cost-Aware LLMs Orchestration System via Reinforcement Learning [104.63494870852894]
We present xRouter, a tool-calling-based routing system in which a learned router can either answer directly or invoke one or more external models. Our implementation encompasses the full reinforcement learning framework, including reward and cost accounting. Across diverse benchmarks, xRouter achieves strong cost-performance trade-offs.
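A toy version of the kind of cost-penalized reward such an RL-trained router could optimize is sketched below; the utility value, per-million-token prices, and cost weighting are illustrative assumptions, not xRouter's actual accounting.

```python
# Toy cost-penalized routing reward: utility for a correct answer minus a
# price for the tokens spent. All prices and the weighting are illustrative.
def routing_reward(correct: bool, prompt_tokens: int, completion_tokens: int,
                   price_in: float, price_out: float, cost_weight: float = 1.0) -> float:
    dollar_cost = (prompt_tokens * price_in + completion_tokens * price_out) / 1e6
    return (1.0 if correct else 0.0) - cost_weight * dollar_cost

print(routing_reward(True, 1200, 300, price_in=3.0, price_out=15.0))  # 0.9919
```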
arXiv Detail & Related papers (2025-10-09T16:52:01Z)
- Learning to Route LLMs from Bandit Feedback: One Policy, Many Trade-offs [69.2486294522259]
BaRP is a Bandit Routing-feedback with Preferences approach that trains under the same partial-feedback restriction as deployment. Framed as a contextual bandit over prompt features and a user preference vector, our method simulates an online feedback setting during training and adapts its routing decisions to each new prompt.
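The contextual-bandit formulation can be illustrated with a generic LinUCB-style router in which the context concatenates prompt features and a preference vector and only the chosen arm's reward is observed; this is a standard bandit sketch under those assumptions, not the paper's exact algorithm.

```python
# Generic LinUCB router: each arm is a candidate model, the context is prompt
# features plus a user preference vector, and feedback is partial (only the
# selected arm's reward is seen). Illustrative, not BaRP's training recipe.
import numpy as np

class LinUCBRouter:
    def __init__(self, n_arms: int, dim: int, alpha: float = 1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]     # per-arm design matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]   # per-arm reward accumulators

    def select(self, context: np.ndarray) -> int:
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            ucb = theta @ context + self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, arm: int, context: np.ndarray, reward: float) -> None:
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

# Context = prompt features concatenated with a (quality, cost) preference vector.
router = LinUCBRouter(n_arms=3, dim=10)
ctx = np.concatenate([np.random.rand(8), [0.9, 0.1]])
arm = router.select(ctx)
router.update(arm, ctx, reward=0.8)   # bandit feedback for the chosen arm only
```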
arXiv Detail & Related papers (2025-10-08T18:24:59Z)
- SATER: A Self-Aware and Token-Efficient Approach to Routing and Cascading [39.20076289493037]
We introduce SATER, a dual-mode compatible approach that fine-tunes models through shortest-response preference optimization and a confidence-aware rejection mechanism. SATER significantly reduces redundant outputs and response times, while improving both the performance of pre-generation routing and the efficiency of cascade routing.
arXiv Detail & Related papers (2025-10-04T19:55:36Z)
- CustomIR: Unsupervised Fine-Tuning of Dense Embeddings for Known Document Corpora [0.0]
CustomIR is a framework for unsupervised adaptation of language embedding models to domain-specific corpora. Our experiments show that CustomIR consistently improves retrieval effectiveness, with small models gaining up to 2.3 points in Recall@10. These results highlight that targeted synthetic fine-tuning offers a scalable and cost-efficient strategy for increasing domain-specific performance.
arXiv Detail & Related papers (2025-09-30T00:25:47Z)
- $\texttt{SPECS}$: Faster Test-Time Scaling through Speculative Drafts [55.231201692232894]
$\texttt{SPECS}$ is a latency-aware test-time scaling method inspired by speculative decoding. Our results show that $\texttt{SPECS}$ matches or surpasses beam search accuracy while reducing latency by up to $\sim$19.1%.
arXiv Detail & Related papers (2025-06-15T05:50:05Z)
- SkewRoute: Training-Free LLM Routing for Knowledge Graph Retrieval-Augmented Generation via Score Skewness of Retrieved Context [19.447729423696096]
Large language models excel at many tasks but often incur high inference costs during deployment. A promising solution to balance performance and cost is LLM routing, which directs simple queries to smaller LLMs and complex ones to larger LLMs. We propose a novel, training-free routing framework, the first tailored to KG-RAG, that effectively balances performance and cost in a plug-and-play manner.
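One plausible reading of the skewness signal is sketched below: compute the skewness of the retriever's scores and send queries with strongly skewed (concentrated) scores to the smaller LLM; the rule's direction and threshold are illustrative, and the paper's exact criterion may differ.

```python
# Training-free routing on the skewness of retrieval scores, loosely in the
# spirit of SkewRoute. Threshold and decision direction are assumptions.
import numpy as np

def score_skewness(scores: np.ndarray) -> float:
    """Sample skewness of retriever scores (Fisher-Pearson, no bias correction)."""
    s = np.asarray(scores, dtype=float)
    centered = s - s.mean()
    std = s.std()
    return float(np.mean(centered**3) / (std**3 + 1e-12))

def route_by_skew(scores: np.ndarray, threshold: float = 1.0) -> str:
    # Strongly skewed scores suggest a few clearly relevant items, i.e. an
    # "easier" query that a smaller LLM can likely handle.
    return "small-llm" if score_skewness(scores) > threshold else "large-llm"

print(route_by_skew(np.array([0.95, 0.2, 0.15, 0.1, 0.05])))  # -> "small-llm"
```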
arXiv Detail & Related papers (2025-05-28T14:45:56Z)
- syftr: Pareto-Optimal Generative AI [40.80352098169579]
syftr is a framework that performs efficient multi-objective search over a broad space of agentic and non-agentic RAG configurations. It finds flows that are on average approximately 9 times cheaper while preserving most of the accuracy of the most accurate flows.
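The Pareto-optimality target can be made concrete with a small dominance filter over (accuracy, cost) pairs; this generic sketch shows only the selection criterion, not syftr's actual search procedure.

```python
# Generic Pareto-front filter over (name, accuracy, cost) configurations.
from typing import List, Tuple

def pareto_front(flows: List[Tuple[str, float, float]]) -> List[Tuple[str, float, float]]:
    """Keep configurations not dominated by any other (higher accuracy, lower cost)."""
    front = []
    for name, acc, cost in flows:
        dominated = any(
            a >= acc and c <= cost and (a > acc or c < cost)
            for _, a, c in flows
        )
        if not dominated:
            front.append((name, acc, cost))
    return front

flows = [("large-rag", 0.82, 1.00), ("small-rag", 0.78, 0.11), ("mid-rag", 0.76, 0.30)]
print(pareto_front(flows))   # mid-rag is dominated by small-rag and dropped
```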
arXiv Detail & Related papers (2025-05-26T17:43:13Z)
- LightRouter: Towards Efficient LLM Collaboration with Minimal Overhead [19.573553157421774]
LightRouter is a novel framework designed to systematically select and integrate a small subset of LLMs from a larger pool. Experiments demonstrate that LightRouter matches or outperforms widely-used ensemble baselines, achieving up to a 25% improvement in accuracy. This work introduces a practical approach for efficient LLM selection and provides valuable insights into optimal strategies for model combination.
arXiv Detail & Related papers (2025-05-22T04:46:04Z)
- How Robust Are Router-LLMs? Analysis of the Fragility of LLM Routing Capabilities [62.474732677086855]
Large language model (LLM) routing has emerged as a crucial strategy for balancing computational costs with performance. We propose the DSC benchmark: Diverse, Simple, and Categorized, an evaluation framework that categorizes router performance across a broad spectrum of query types.
arXiv Detail & Related papers (2025-03-20T19:52:30Z)
- Fast or Better? Balancing Accuracy and Cost in Retrieval-Augmented Generation with Flexible User Control [52.405085773954596]
Retrieval-Augmented Generation has emerged as a powerful approach to mitigate large language model hallucinations. Existing RAG frameworks often apply retrieval indiscriminately, leading to inefficiencies such as over-retrieving. We introduce a novel user-controllable RAG framework that enables dynamic adjustment of the accuracy-cost trade-off.
arXiv Detail & Related papers (2025-02-17T18:56:20Z)
- FTP: A Fine-grained Token-wise Pruner for Large Language Models via Token Routing [17.01412432658081]
Large language models (LLMs) have demonstrated superior performance across various tasks by adhering to scaling laws. We propose a fine-grained token-wise pruning approach for LLMs, which uses a learnable router to adaptively identify the less important tokens. Our approach achieves state-of-the-art (SOTA) pruning results, surpassing other existing pruning methods.
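A loose sketch of token-wise pruning with a learnable router is given below: a lightweight scorer ranks tokens and only the top fraction is passed on to the expensive block; the module layout and keep ratio are assumptions, not FTP's design.

```python
# Illustrative token router: score each token, keep the top fraction, and
# forward only those tokens to the next (expensive) transformer block.
import torch
import torch.nn as nn

class TokenRouter(nn.Module):
    def __init__(self, hidden_dim: int, keep_ratio: float = 0.7):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)   # per-token importance score
        self.keep_ratio = keep_ratio

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim)
        scores = self.scorer(x).squeeze(-1)               # (batch, seq_len)
        k = max(1, int(x.size(1) * self.keep_ratio))
        keep_idx = scores.topk(k, dim=1).indices.sort(dim=1).values  # keep original order
        return torch.gather(x, 1, keep_idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))

router = TokenRouter(hidden_dim=512)
pruned = router(torch.randn(2, 128, 512))   # -> (2, 89, 512): less important tokens dropped
```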
arXiv Detail & Related papers (2024-12-16T07:09:46Z)
- Mixture of Nested Experts: Adaptive Processing of Visual Tokens [49.43920770789789]
Vision Transformer (ViT) based models fail to capitalize on inherent redundancy, leading to higher computational costs.
We present Mixture of Nested Experts (MoNE), which utilizes a nested structure for experts, wherein individual experts fall on an increasing compute-accuracy curve.
We validate our approach on standard image and video datasets - ImageNet-21K, Kinetics400, and Something-Something-v2.
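A rough sketch of the nested-expert idea follows: each expert is a prefix slice of one shared feed-forward layer, so larger experts strictly contain smaller ones, and a per-token router chooses a capacity level; the dimensions, argmax routing, and masking are illustrative, not MoNE's exact design.

```python
# Nested-experts sketch: experts are prefix slices of one shared FFN, and a
# router assigns each token a capacity level. Masking is used here only for
# clarity; a real implementation would slice weights to actually save compute.
import torch
import torch.nn as nn

class NestedExpertsFFN(nn.Module):
    def __init__(self, dim: int = 256, hidden: int = 1024, levels=(256, 512, 1024)):
        super().__init__()
        self.up = nn.Linear(dim, hidden)
        self.down = nn.Linear(hidden, dim)
        self.router = nn.Linear(dim, len(levels))   # picks a capacity level per token
        self.levels = levels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, dim); each token uses only its assigned prefix of hidden units.
        level_idx = self.router(x).argmax(dim=-1)                  # (num_tokens,)
        h = torch.relu(self.up(x))                                 # (num_tokens, hidden)
        widths = torch.tensor(self.levels, device=x.device)[level_idx]
        mask = torch.arange(h.size(-1), device=x.device)[None, :] < widths[:, None]
        return self.down(h * mask)                                 # cheap tokens use fewer units

out = NestedExpertsFFN()(torch.randn(8, 256))   # -> (8, 256)
```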
arXiv Detail & Related papers (2024-07-29T13:19:31Z)
- Model Cascading for Code: A Cascaded Black-Box Multi-Model Framework for Cost-Efficient Code Completion with Self-Testing [20.445496441396028]
We introduce a novel framework combining model cascading and inference-time self-testing algorithms to find multiple near-optimal self-testing options on the cost-accuracy tradeoff. Our approach leverages self-generated tests to both enhance accuracy and evaluate model cascading decisions. Experimental results show that our cascading approach reduces costs by an average of 26%, and up to 70% in the best case.
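A hedged sketch of cascaded code completion with self-generated tests is shown below: the cheapest model's candidate is accepted if it passes its own tests, otherwise the query escalates; the model and test-generation callables are hypothetical stubs, and real use would sandbox the execution.

```python
# Cascaded code completion with self-testing: try the cheapest model first and
# escalate only when its candidate fails the self-generated tests.
from typing import Callable, List

def run_candidate(code: str, tests: str) -> bool:
    """Execute candidate code and its tests in one namespace; True if nothing raises."""
    namespace: dict = {}
    try:
        exec(code, namespace)   # sketch only; sandbox untrusted code in real use
        exec(tests, namespace)
        return True
    except Exception:
        return False

def cascade_complete(
    prompt: str,
    models: List[Callable[[str], str]],     # ordered cheapest -> most expensive
    gen_tests: Callable[[str], str],
) -> str:
    tests = gen_tests(prompt)               # self-generated unit tests
    for model in models:
        candidate = model(prompt)
        if run_candidate(candidate, tests):
            return candidate                # cheapest passing candidate wins
    return candidate                        # fall back to the strongest model's output
```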
arXiv Detail & Related papers (2024-05-24T16:20:04Z)