LightRetriever: A LLM-based Hybrid Retrieval Architecture with 1000x Faster Query Inference
- URL: http://arxiv.org/abs/2505.12260v3
- Date: Tue, 05 Aug 2025 08:01:24 GMT
- Title: LightRetriever: A LLM-based Hybrid Retrieval Architecture with 1000x Faster Query Inference
- Authors: Guangyuan Ma, Yongliang Ma, Xuanrui Gou, Zhenpeng Su, Ming Zhou, Songlin Hu
- Abstract summary: Large Language Models (LLMs)-based text retrieval retrieves documents relevant to search queries based on vector similarities. We propose LightRetriever, a novel LLM-based retriever with extremely lightweight query encoders. Our method achieves over 1000x speedup in query encoding and over 10x increase in end-to-end retrieval throughput.
- Score: 31.040756207765796
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs)-based text retrieval retrieves documents relevant to search queries based on vector similarities. Documents are pre-encoded offline, while queries arrive in real-time, necessitating an efficient online query encoder. Although LLMs significantly enhance retrieval capabilities, serving deeply parameterized LLMs slows down query inference throughput and increases demands for online deployment resources. In this paper, we propose LightRetriever, a novel LLM-based retriever with extremely lightweight query encoders. Our method retains a full-sized LLM for document encoding, but reduces the workload of query encoding to no more than an embedding lookup. Compared to serving a full LLM on an A800 GPU, our method achieves over 1000x speedup in query encoding and over 10x increase in end-to-end retrieval throughput. Extensive experiments on large-scale retrieval benchmarks show that LightRetriever generalizes well across diverse tasks, maintaining an average of 95% retrieval performance.
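The asymmetry the abstract describes (a full-sized LLM pre-encodes documents offline, while online query encoding is reduced to no more than an embedding lookup, followed by vector-similarity ranking) can be sketched as follows. This is an illustrative toy only: the vocabulary, the random embedding table, and the placeholder document encoder are assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a tiny vocabulary and a small embedding dimension.
VOCAB = {"fast": 0, "llm": 1, "retrieval": 2, "query": 3,
         "document": 4, "encoder": 5, "hybrid": 6, "lightweight": 7}
DIM = 4

# In LightRetriever the query side is essentially a token-embedding table;
# here it is random for illustration (it would be trained in practice).
embedding_table = rng.normal(size=(len(VOCAB), DIM))

def encode_query(query: str) -> np.ndarray:
    """Query encoding as a lookup: average the token embeddings.
    No transformer forward pass is needed at serving time."""
    ids = [VOCAB[t] for t in query.lower().split() if t in VOCAB]
    return embedding_table[ids].mean(axis=0)

def encode_document(doc: str) -> np.ndarray:
    """Stand-in for the full-sized LLM document encoder, run offline.
    A real system would return the LLM's pooled hidden state."""
    ids = [VOCAB[t] for t in doc.lower().split() if t in VOCAB]
    return embedding_table[ids].sum(axis=0)  # placeholder computation

# Offline: pre-encode the corpus once.
corpus = ["lightweight llm retrieval", "hybrid document encoder"]
doc_vectors = np.stack([encode_document(d) for d in corpus])

# Online: a cheap table lookup plus a dot product ranks the corpus.
q = encode_query("fast llm retrieval")
scores = doc_vectors @ q
best = int(np.argmax(scores))
```

Because the online path is only an indexing operation and a matrix-vector product, the per-query cost is independent of the LLM's depth, which is the source of the claimed speedup.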
Related papers
- EnrichIndex: Using LLMs to Enrich Retrieval Indices Offline [47.064685680644345]
Real-world retrieval systems are often required to implicitly reason whether a document is relevant. Large language models (LLMs) hold great potential in identifying such implied relevance by leveraging their reasoning skills. We introduce EnrichIndex, a retrieval approach which uses the LLM offline to build semantically-enriched retrieval indices.
arXiv Detail & Related papers (2025-04-04T17:08:46Z)
- Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning [50.419872452397684]
Search-R1 is an extension of reinforcement learning for reasoning frameworks. It generates search queries during step-by-step reasoning with real-time retrieval. It improves performance by 41% (Qwen2.5-7B) and 20% (Qwen2.5-3B) over various RAG baselines.
arXiv Detail & Related papers (2025-03-12T16:26:39Z)
- LightPAL: Lightweight Passage Retrieval for Open Domain Multi-Document Summarization [9.739781953744606]
Open-Domain Multi-Document Summarization (ODMDS) is the task of generating summaries from large document collections in response to user queries.
Traditional retrieve-then-summarize approaches fall short for open-ended queries in ODMDS tasks.
We propose LightPAL, a lightweight passage retrieval method for ODMDS.
arXiv Detail & Related papers (2024-06-18T10:57:27Z)
- PromptReps: Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval [76.50690734636477]
We propose PromptReps, which combines the advantages of both categories: no need for training and the ability to retrieve from the whole corpus.
The retrieval system harnesses both dense text embedding and sparse bag-of-words representations.
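Combining dense embeddings with sparse bag-of-words representations can be sketched as a simple interpolated score. This is a minimal, generic illustration: the cosine-similarity and term-overlap choices, the function names, and the `alpha` mixing weight are assumptions, not PromptReps' actual formulation.

```python
import math
from collections import Counter

def dense_score(q_vec, d_vec):
    """Cosine similarity between a query and a document embedding."""
    num = sum(a * b for a, b in zip(q_vec, d_vec))
    den = (math.sqrt(sum(a * a for a in q_vec))
           * math.sqrt(sum(b * b for b in d_vec)))
    return num / den if den else 0.0

def sparse_score(query: str, doc: str):
    """Bag-of-words overlap as a dot product of term-frequency vectors."""
    q, d = Counter(query.split()), Counter(doc.split())
    return sum(q[t] * d[t] for t in q)

def hybrid_score(q_vec, d_vec, query, doc, alpha=0.5):
    """Interpolate the dense and sparse signals with weight alpha."""
    return (alpha * dense_score(q_vec, d_vec)
            + (1 - alpha) * sparse_score(query, doc))
```

In a real hybrid system the two signals are typically score-normalized before mixing, since cosine similarities and term-overlap counts live on different scales.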
arXiv Detail & Related papers (2024-04-29T04:51:30Z)
- Optimizing LLM Queries in Relational Data Analytics Workloads [50.95919232839785]
Batch data analytics is a growing application for Large Language Models (LLMs). LLMs enable users to perform a wide range of natural language tasks, such as classification, entity extraction, and translation, over large datasets. We propose novel techniques that can significantly reduce the cost of LLM calls for relational data analytics workloads.
arXiv Detail & Related papers (2024-03-09T07:01:44Z)
- LLatrieval: LLM-Verified Retrieval for Verifiable Generation [67.93134176912477]
Verifiable generation aims to let the large language model (LLM) generate text with supporting documents.
We propose LLatrieval (Large Language Model Verified Retrieval), where the LLM updates the retrieval result until it verifies that the retrieved documents can sufficiently support answering the question.
Experiments show that LLatrieval significantly outperforms extensive baselines and achieves state-of-the-art results.
arXiv Detail & Related papers (2023-11-14T01:38:02Z)
- Query Rewriting for Retrieval-Augmented Large Language Models [139.242907155883]
Large Language Models (LLMs) play powerful, black-box readers in the retrieve-then-read pipeline.
This work introduces a new framework, Rewrite-Retrieve-Read instead of the previous retrieve-then-read for the retrieval-augmented LLMs.
arXiv Detail & Related papers (2023-05-23T17:27:50Z)
- Large Language Models are Strong Zero-Shot Retriever [89.16756291653371]
We propose a simple method that applies a large language model (LLM) to large-scale retrieval in zero-shot scenarios.
Our method, the language model as Retriever (LameR), is built upon no neural models other than an LLM.
arXiv Detail & Related papers (2023-04-27T14:45:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.