Lookahead Routing for Large Language Models
- URL: http://arxiv.org/abs/2510.19506v1
- Date: Wed, 22 Oct 2025 12:00:21 GMT
- Title: Lookahead Routing for Large Language Models
- Authors: Canbin Huang, Tianyuan Shi, Yuhua Zhu, Ruijun Chen, Xiaojun Quan
- Abstract summary: Lookahead is a routing framework that "foresees" potential model outputs and uses these predictions to guide model selection. Empirical evaluations across seven public benchmarks show that Lookahead consistently outperforms existing routing baselines.
- Score: 24.082620717301477
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language model (LLM) routers improve the efficiency of multi-model systems by directing each query to the most appropriate model while leveraging the diverse strengths of heterogeneous LLMs. Most existing approaches frame routing as a classification problem based solely on the input query. While this reduces overhead by avoiding inference across all models, it overlooks valuable information that could be gleaned from potential outputs and fails to capture implicit intent or contextual nuances that often emerge only during response generation. These limitations can result in suboptimal routing decisions, particularly for complex or ambiguous queries that require deeper semantic understanding. To address this challenge, we propose Lookahead, a routing framework that "foresees" potential model outputs by predicting their latent representations and uses these predictions to guide model selection, thus enabling more informed routing without full inference. Within this framework, we implement two approaches based on causal and masked language models. Empirical evaluations across seven public benchmarks - spanning instruction following, mathematical reasoning, and code generation - show that Lookahead consistently outperforms existing routing baselines, achieving an average performance gain of 7.7% over the state-of-the-art. Our code is available at https://github.com/huangcb01/lookahead-routing.
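As a concrete reading of the idea, here is a minimal, hypothetical PyTorch sketch (not the authors' released implementation; see the linked repository for that): a small predictor estimates a latent representation of each candidate model's would-be output directly from the query embedding, and a scoring head turns those predicted latents into routing scores. All module names, dimensions, and the shared-scorer design are assumptions.

```python
import torch
import torch.nn as nn

class LookaheadRouterSketch(nn.Module):
    """Hypothetical sketch of lookahead-style routing: predict a latent
    representation of each candidate model's would-be output from the
    query alone, then score candidates from those predicted latents
    instead of from the raw query."""

    def __init__(self, query_dim: int, latent_dim: int, num_models: int):
        super().__init__()
        # One output-latent predictor per candidate model (an assumption;
        # the paper may share parameters across models differently).
        self.latent_predictors = nn.ModuleList(
            [nn.Linear(query_dim, latent_dim) for _ in range(num_models)]
        )
        # Scoring head: predicted output latent -> scalar routing score.
        self.scorer = nn.Linear(latent_dim, 1)

    def forward(self, query_emb: torch.Tensor) -> torch.Tensor:
        # query_emb: (batch, query_dim) -> scores: (batch, num_models)
        scores = [self.scorer(torch.tanh(p(query_emb)))
                  for p in self.latent_predictors]
        return torch.cat(scores, dim=-1)

router = LookaheadRouterSketch(query_dim=768, latent_dim=256, num_models=4)
query = torch.randn(2, 768)           # stand-in for encoded queries
chosen = router(query).argmax(dim=-1) # index of the routed model
```

The point of the sketch is that routing cost stays at one forward pass through a lightweight router, while the decision is still conditioned on (predicted) output information rather than the query alone.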
Related papers
- On the Out-of-Distribution Generalization of Reasoning in Multimodal LLMs for Simple Visual Planning Tasks [56.98385132295952]
We evaluate how well chain-of-thought approaches generalize on a simple planning task. We find that reasoning traces which combine multiple text formats yield the best (and non-trivial) OOD generalization, and that purely text-based models consistently outperform those utilizing image-based inputs.
arXiv Detail & Related papers (2026-02-17T09:51:40Z) - HierRouter: Coordinated Routing of Specialized Large Language Models via Reinforcement Learning [11.03159148013318]
Large Language Models (LLMs) deliver state-of-the-art performance across many tasks but impose high computational and memory costs. We propose HierRouter, a hierarchical routing approach that dynamically assembles inference pipelines from a pool of specialized, lightweight language models.
arXiv Detail & Related papers (2025-11-13T02:12:14Z) - ICL-Router: In-Context Learned Model Representations for LLM Routing [30.759446235510467]
We propose a novel routing method using in-context vectors to represent model capabilities. Our method achieves state-of-the-art routing performance in both in-distribution and out-of-distribution tasks.
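A rough sketch of capability-vector routing under assumed details (how the vectors are built and the cosine nearest-neighbor rule are illustrative stand-ins, not the paper's exact method):

```python
import numpy as np

def route_by_capability(query_vec: np.ndarray,
                        capability_vecs: dict[str, np.ndarray]) -> str:
    """Pick the model whose capability vector best matches the query.
    Hypothetical reading: each candidate model is summarized by one
    fixed vector, and routing is nearest-neighbor in cosine similarity."""
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(capability_vecs, key=lambda m: cos(query_vec, capability_vecs[m]))

caps = {"math-llm": np.random.randn(64), "code-llm": np.random.randn(64)}
print(route_by_capability(np.random.randn(64), caps))
```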
arXiv Detail & Related papers (2025-10-10T06:47:37Z) - Every Step Counts: Decoding Trajectories as Authorship Fingerprints of dLLMs [63.82840470917859]
We show that the decoding mechanism of dLLMs can be used as a powerful tool for model attribution. We propose a novel information extraction scheme called the Directed Decoding Map (DDM), which captures structural relationships between decoding steps and better reveals model-specific behaviors.
arXiv Detail & Related papers (2025-10-02T06:25:10Z) - Arch-Router: Aligning LLM Routing with Human Preferences [1.859931123372708]
Routing has become an essential technique to operationalize the use of different models. We propose a preference-aligned routing framework that guides model selection by matching queries to user-defined domains. Our approach captures subjective evaluation criteria and makes routing decisions more transparent and flexible.
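A toy sketch of the domain-matching idea, with keyword overlap standing in for whatever matcher the paper actually uses (all domains, keywords, and model names below are made up for illustration):

```python
# Illustrative domains, keywords, and model names; a real system would
# use a learned or LLM-based matcher rather than keyword overlap.
DOMAIN_KEYWORDS = {
    "code": {"function", "bug", "python", "compile"},
    "legal": {"contract", "clause", "liability"},
}
PREFERENCES = {"code": "code-specialist-llm", "legal": "general-llm"}

def route(query: str, default: str = "general-llm") -> str:
    tokens = set(query.lower().split())
    best = max(DOMAIN_KEYWORDS, key=lambda d: len(tokens & DOMAIN_KEYWORDS[d]))
    if not tokens & DOMAIN_KEYWORDS[best]:
        return default  # no domain matched; fall back
    return PREFERENCES[best]

print(route("Fix this Python function bug"))  # -> code-specialist-llm
```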
arXiv Detail & Related papers (2025-06-19T23:57:41Z) - RAGRouter: Learning to Route Queries to Multiple Retrieval-Augmented Language Models [45.58601993849455]
Retrieval-Augmented Generation (RAG) significantly improves the performance of Large Language Models (LLMs) on knowledge-intensive tasks. We observe that external documents dynamically affect LLMs' ability to answer queries, while existing routing methods exhibit suboptimal performance in RAG scenarios. We propose RAGRouter, a parametric RAG-aware routing design, which leverages document embeddings and RAG capability embeddings with contrastive learning to capture knowledge representation shifts.
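A hedged sketch of document-aware routing (the additive "knowledge shift" and the embedding shapes are assumptions for illustration; the paper's contrastive training is not reproduced here):

```python
import numpy as np

def rag_route(query_emb: np.ndarray,
              doc_embs: np.ndarray,
              model_caps: dict[str, np.ndarray]) -> str:
    """Route with awareness of the retrieved documents: shift the query
    representation by the document embeddings (a stand-in for the
    knowledge-representation-shift idea), then match the shifted
    representation against per-model RAG capability embeddings."""
    context = query_emb + doc_embs.mean(axis=0)
    scores = {m: float(context @ cap) for m, cap in model_caps.items()}
    return max(scores, key=scores.get)

q, docs = np.random.randn(32), np.random.randn(3, 32)
caps = {"rag-strong-llm": np.random.randn(32), "rag-weak-llm": np.random.randn(32)}
print(rag_route(q, docs, caps))
```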
arXiv Detail & Related papers (2025-05-29T03:44:56Z) - How Robust Are Router-LLMs? Analysis of the Fragility of LLM Routing Capabilities [62.474732677086855]
Large language model (LLM) routing has emerged as a crucial strategy for balancing computational costs with performance. We propose the DSC benchmark (Diverse, Simple, and Categorized), an evaluation framework that categorizes router performance across a broad spectrum of query types.
arXiv Detail & Related papers (2025-03-20T19:52:30Z) - ORI: O Routing Intelligence [0.7493096930372414]
Single large language models (LLMs) often fall short when faced with the ever-growing range of tasks. We propose ORI (O Routing Intelligence), a dynamic framework that leverages a set of LLMs. By intelligently routing queries, ORI outperforms the strongest individual models by up to 2.7 points on MMLU and 1.8 points on MuSR.
arXiv Detail & Related papers (2025-02-14T10:00:20Z) - Universal Model Routing for Efficient LLM Inference [69.86195589350264]
Model routing is a technique for reducing the inference cost of large language models (LLMs). We propose UniRoute, a new approach to the problem of dynamic routing, where new, previously unobserved LLMs are available at test time. We show that the resulting routing strategies are estimates of a theoretically optimal routing rule, and quantify their errors via an excess risk bound.
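One hedged way to picture routing over previously unseen LLMs, assuming each model is described only by per-cluster validation error rates (the cluster construction, cost penalty, and all names below are illustrative, not the paper's exact estimator):

```python
import numpy as np

def route_unseen(query_emb: np.ndarray,
                 cluster_centers: np.ndarray,
                 model_cluster_errors: dict[str, np.ndarray],
                 model_costs: dict[str, float],
                 lam: float = 0.1) -> str:
    """Each model is described only by its error rate on a few
    validation clusters, so a newly added model just needs its own
    error vector -- no router retraining."""
    # Nearest validation cluster to this query.
    c = int(np.argmin(np.linalg.norm(cluster_centers - query_emb, axis=1)))
    # Minimize estimated error plus a cost penalty.
    scores = {m: float(errs[c]) + lam * model_costs[m]
              for m, errs in model_cluster_errors.items()}
    return min(scores, key=scores.get)

centers = np.random.randn(5, 16)
errors = {"big-llm": np.full(5, 0.10), "small-llm": np.full(5, 0.30)}
costs = {"big-llm": 1.0, "small-llm": 0.2}
print(route_unseen(np.random.randn(16), centers, errors, costs))
```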
arXiv Detail & Related papers (2025-02-12T20:30:28Z) - COrAL: Order-Agnostic Language Modeling for Efficient Iterative Refinement [80.18490952057125]
Iterative refinement has emerged as an effective paradigm for enhancing the capabilities of large language models (LLMs) on complex tasks.
We propose Context-Wise Order-Agnostic Language Modeling (COrAL) to overcome the efficiency limitations of this paradigm.
Our approach models multiple token dependencies within manageable context windows, enabling the model to perform iterative refinement internally.
arXiv Detail & Related papers (2024-10-12T23:56:19Z) - CART: A Generative Cross-Modal Retrieval Framework with Coarse-To-Fine Semantic Modeling [53.97609687516371]
Cross-modal retrieval aims to search for instances that are semantically related to the query through the interaction of data from different modalities. Traditional solutions utilize a single-tower or dual-tower framework to explicitly compute the score between queries and candidates. We propose a generative cross-modal retrieval framework (CART) based on coarse-to-fine semantic modeling.
arXiv Detail & Related papers (2024-06-25T12:47:04Z) - DCR: Divide-and-Conquer Reasoning for Multi-choice Question Answering with LLMs [9.561022942046279]
We propose Divide and Conquer Reasoning (DCR) to enhance the reasoning capability of large language models (LLMs). We first categorize questions into two subsets based on a confidence score ($\mathcal{CS}$), which is estimated from the statistical frequency of generated answers. A minimal sketch of this split follows.
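The confidence score is easy to make concrete, assuming $\mathcal{CS}$ is the relative frequency of the majority answer over repeated samples (the threshold below is an illustrative assumption, not the paper's value):

```python
from collections import Counter

def confidence_score(sampled_answers: list[str]) -> float:
    """Relative frequency of the most common answer across samples."""
    counts = Counter(sampled_answers)
    return counts.most_common(1)[0][1] / len(sampled_answers)

def split_by_confidence(samples_per_question: dict[str, list[str]],
                        tau: float = 0.7):
    """Divide questions into high- and low-confidence subsets
    (the threshold tau is an illustrative assumption)."""
    high = {q for q, s in samples_per_question.items()
            if confidence_score(s) >= tau}
    return high, set(samples_per_question) - high

samples = {"Q1": ["A", "A", "A", "B"], "Q2": ["C", "D", "A", "B"]}
print(split_by_confidence(samples))  # Q1 -> high (0.75), Q2 -> low (0.25)
```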
arXiv Detail & Related papers (2024-01-10T14:38:46Z) - Deep Model Reassembly [60.6531819328247]
We explore a novel knowledge-transfer task, termed Deep Model Reassembly (DeRy).
The goal of DeRy is to first dissect each model into distinctive building blocks, and then selectively reassemble the derived blocks to produce customized networks.
We demonstrate that on ImageNet, the best reassembled model achieves 78.6% top-1 accuracy without fine-tuning.
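A toy PyTorch illustration of the reassembly idea (this shows only the stitching step under assumed interface compatibility; the paper's dissection criterion and block-search procedure are not reproduced):

```python
import torch
import torch.nn as nn

# Two (pretend-trained) source networks with compatible hidden widths.
net_a = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))
net_b = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))

# Dissect each network into its building blocks.
blocks_a, blocks_b = list(net_a.children()), list(net_b.children())

# Reassemble: early blocks from net_a, late blocks from net_b.
hybrid = nn.Sequential(*blocks_a[:2], *blocks_b)
print(hybrid(torch.randn(1, 16)).shape)  # torch.Size([1, 10])
```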
arXiv Detail & Related papers (2022-10-24T10:16:13Z)