Improving Neural Retrieval with Attribution-Guided Query Rewriting
- URL: http://arxiv.org/abs/2602.11841v1
- Date: Thu, 12 Feb 2026 11:34:06 GMT
- Title: Improving Neural Retrieval with Attribution-Guided Query Rewriting
- Authors: Moncef Garouani, Josiane Mothe
- Abstract summary: Underspecified or ambiguous queries can misdirect ranking even when relevant documents exist. We propose an attribution-guided query rewriting method that uses token-level explanations to guide rewriting. The resulting rewrites consistently improve retrieval effectiveness over strong baselines.
- Score: 3.1153758106426603
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural retrievers are effective but brittle: underspecified or ambiguous queries can misdirect ranking even when relevant documents exist. Existing approaches address this brittleness only partially: LLMs rewrite queries without retriever feedback, and explainability methods identify misleading tokens but are used for post-hoc analysis. We close this loop and propose an attribution-guided query rewriting method that uses token-level explanations to guide query rewriting. For each query, we compute gradient-based token attributions from the retriever and then use these scores as soft guidance in a structured prompt to an LLM that clarifies weak or misleading query components while preserving intent. Evaluated on BEIR collections, the resulting rewrites consistently improve retrieval effectiveness over strong baselines, with larger gains for implicit or ambiguous information needs.
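The pipeline in the abstract (score a query, attribute the score to individual tokens, then feed low-attribution tokens as soft guidance into a rewriting prompt) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the `idf`-weighted overlap "retriever" and its per-token score contributions stand in for a neural retriever with gradient-based attributions, and the prompt template is a hypothetical example of the structured prompt sent to an LLM.

```python
# Toy sketch of attribution-guided query rewriting.
# ASSUMPTIONS: the overlap scorer replaces a neural retriever; per-token score
# contributions replace gradient-based attributions; the prompt is illustrative.

def token_attributions(query_tokens, doc_tokens, idf):
    """Toy 'retriever' score: sum of idf weights of query tokens found in the doc.
    Each token's attribution is its additive contribution to that score
    (a stand-in for gradient-based token attributions from a dense retriever)."""
    doc = set(doc_tokens)
    return {t: (idf.get(t, 0.0) if t in doc else 0.0) for t in query_tokens}

def build_rewrite_prompt(query, attributions, threshold=0.5):
    """Structured prompt that flags weak (low-attribution) tokens as soft
    guidance for an LLM rewriter, while asking it to preserve intent."""
    weak = [t for t, a in attributions.items() if a < threshold]
    return (
        "Rewrite the query to clarify its intent.\n"
        f"Query: {query}\n"
        f"Low-attribution tokens (clarify or replace): {', '.join(weak)}\n"
        "Preserve the original information need."
    )
```

For example, if "apple" contributes nothing to the retrieval score of the top document, it is surfaced to the rewriter as a token to clarify (brand? fruit?), while high-attribution tokens are left alone.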
Related papers
- Hint-Augmented Re-ranking: Efficient Product Search using LLM-Based Query Decomposition [20.966359103135762]
We show that LLMs can uncover latent intent behind superlatives in e-commerce queries. Our approach decomposes queries into attribute-value hints generated concurrently with retrieval. Our method improves search performance by 10.9 points in MAP and ranking by 5.9 points in MRR over baselines.
arXiv Detail & Related papers (2025-11-17T23:53:25Z) - Reasoning-enhanced Query Understanding through Decomposition and Interpretation [87.56450566014625]
ReDI is a Reasoning-enhanced approach for query understanding through Decomposition and Interpretation. We compiled a large-scale dataset of real-world complex queries from a major search engine. Experiments on BRIGHT and BEIR demonstrate that ReDI consistently surpasses strong baselines in both sparse and dense retrieval paradigms.
arXiv Detail & Related papers (2025-09-08T10:58:42Z) - Constructing Set-Compositional and Negated Representations for First-Stage Ranking [23.123116796159717]
We introduce Disentangled Negation, which penalizes only the negated parts of a query, and a Combined Pseudo-Term approach that enhances LSRs' ability to handle intersections. We find that our zero-shot approach is competitive and often outperforms retrievers fine-tuned on compositional data.
arXiv Detail & Related papers (2025-01-13T20:32:38Z) - R-Bot: An LLM-based Query Rewrite System [20.909806427953264]
We propose R-Bot, an LLM-based query rewrite system with a systematic approach. We first design a multi-source rewrite evidence preparation pipeline to generate query rewrite evidence. We then propose a hybrid-semantics retrieval method that combines structural and semantic analysis. We conduct comprehensive experiments on real-world datasets and widely used benchmarks, demonstrating the superior performance of our system.
arXiv Detail & Related papers (2024-12-02T16:13:04Z) - Adaptive Query Rewriting: Aligning Rewriters through Marginal Probability of Conversational Answers [66.55612528039894]
AdaQR is a framework for training query rewriting models with limited rewrite annotations from seed datasets and no passage labels at all.
A novel approach is proposed to assess the retriever's preference for these candidates via the probability of answers conditioned on the conversational query.
arXiv Detail & Related papers (2024-06-16T16:09:05Z) - RaFe: Ranking Feedback Improves Query Rewriting for RAG [83.24385658573198]
We propose a framework for training query rewriting models free of annotations.
By leveraging a publicly available reranker, our framework provides feedback well aligned with the rewriting objectives.
arXiv Detail & Related papers (2024-05-23T11:00:19Z) - Context Aware Query Rewriting for Text Rankers using LLM [5.164642900490078]
We analyze the utility of large-language models for improved query rewriting for text ranking tasks.
We adopt a simple, yet surprisingly effective, approach called context aware query rewriting (CAR).
We find that fine-tuning a ranker using re-written queries offers a significant improvement of up to 33% on the passage ranking task and up to 28% on the document ranking task.
arXiv Detail & Related papers (2023-08-31T14:19:50Z) - Decomposing Complex Queries for Tip-of-the-tongue Retrieval [72.07449449115167]
Complex queries describe content elements (e.g., book characters or events), i.e., information beyond the document text itself.
This retrieval setting, called tip of the tongue (TOT), is especially challenging for models reliant on lexical and semantic overlap between query and document text.
We introduce a simple yet effective framework for handling such complex queries by decomposing the query into individual clues, routing those as sub-queries to specialized retrievers, and ensembling the results.
arXiv Detail & Related papers (2023-05-24T11:43:40Z) - Query Rewriting for Retrieval-Augmented Large Language Models [139.242907155883]
Large Language Models (LLMs) act as powerful, black-box readers in the retrieve-then-read pipeline.
This work introduces a new framework, Rewrite-Retrieve-Read, which replaces the previous retrieve-then-read pipeline for retrieval-augmented LLMs.
arXiv Detail & Related papers (2023-05-23T17:27:50Z) - Large Language Models are Strong Zero-Shot Retriever [89.16756291653371]
We propose a simple method that applies a large language model (LLM) to large-scale retrieval in zero-shot scenarios.
Our method, Large language model as Retriever (LameR), is built upon no neural models other than an LLM.
arXiv Detail & Related papers (2023-04-27T14:45:55Z) - Improving Query Representations for Dense Retrieval with Pseudo Relevance Feedback [29.719150565643965]
This paper proposes ANCE-PRF, a new query encoder that uses pseudo relevance feedback (PRF) to improve query representations for dense retrieval.
ANCE-PRF uses a BERT encoder that consumes the query and the top retrieved documents from a dense retrieval model, ANCE, and it learns to produce better query embeddings directly from relevance labels.
Analysis shows that the PRF encoder effectively captures the relevant and complementary information from PRF documents, while ignoring the noise with its learned attention mechanism.
arXiv Detail & Related papers (2021-08-30T18:10:26Z)
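The pseudo relevance feedback idea in the last entry (refine the query representation using the top retrieved documents) can be sketched with a classic Rocchio-style embedding blend. This is a hypothetical stand-in for ANCE-PRF's learned BERT encoder: the fixed `alpha` interpolation weight replaces the encoder's learned attention over PRF documents.

```python
# Toy pseudo-relevance-feedback query refinement.
# ASSUMPTION: a fixed Rocchio-style interpolation replaces ANCE-PRF's learned
# encoder; alpha is an illustrative weight, not a parameter from the paper.

def prf_query_embedding(query_emb, top_doc_embs, alpha=0.7):
    """Blend the original query embedding with the mean embedding of the
    top retrieved documents, pulling the query toward relevant content."""
    dim = len(query_emb)
    mean_doc = [sum(d[i] for d in top_doc_embs) / len(top_doc_embs)
                for i in range(dim)]
    return [alpha * query_emb[i] + (1 - alpha) * mean_doc[i]
            for i in range(dim)]
```

The learned encoder in ANCE-PRF improves on this fixed blend precisely because its attention mechanism can down-weight noisy feedback documents rather than averaging them in uniformly.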
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.