Am I on the Right Track? What Can Predicted Query Performance Tell Us about the Search Behaviour of Agentic RAG
- URL: http://arxiv.org/abs/2507.10411v1
- Date: Mon, 14 Jul 2025 15:54:50 GMT
- Title: Am I on the Right Track? What Can Predicted Query Performance Tell Us about the Search Behaviour of Agentic RAG
- Authors: Fangzheng Tian, Jinyuan Fang, Debasis Ganguly, Zaiqiao Meng, Craig Macdonald
- Abstract summary: This study examines the applicability of query performance prediction (QPP) within the recent Agentic RAG models Search-R1 and R1-Searcher. We find that applying effective retrievers can achieve higher answer quality within a shorter reasoning process.
- Score: 35.16209722320604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Agentic Retrieval-Augmented Generation (RAG) is a new paradigm where the reasoning model decides when to invoke a retriever (as a "tool") when answering a question. This paradigm, exemplified by recent research works such as Search-R1, enables the model to decide when to search and obtain external information. However, the queries generated by such Agentic RAG models and the role of the retriever in obtaining high-quality answers remain understudied. To this end, this initial study examines the applicability of query performance prediction (QPP) within the recent Agentic RAG models Search-R1 and R1-Searcher. We find that applying effective retrievers can achieve higher answer quality within a shorter reasoning process. Moreover, the QPP estimates of the generated queries, used as an approximation of their retrieval quality, are positively correlated with the quality of the final answer. Ultimately, our work is a step towards adaptive retrieval within Agentic RAG, where QPP is used to inform the model if the retrieved results are likely to be useful.
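The adaptive-retrieval direction described in the abstract can be made concrete with a small sketch. The loop below is a hypothetical illustration, not the Search-R1 or R1-Searcher implementation: the `reasoning_model` and `retriever` callables, the action dictionary, the threshold, and the dispersion-based QPP proxy are all assumptions introduced for the example.

```python
"""Hypothetical sketch of a QPP-gated agentic RAG loop.

The model interface and the QPP proxy below are illustrative stand-ins,
not the Search-R1 / R1-Searcher implementations.
"""
from dataclasses import dataclass
from statistics import pstdev


@dataclass
class SearchResult:
    text: str
    score: float  # retrieval score from the underlying retriever


def qpp_estimate(results: list[SearchResult], k: int = 10) -> float:
    """Post-retrieval QPP proxy: dispersion of the top-k retrieval scores.

    A flat score distribution often signals an unfocused result list; this is
    one simple family of predictors, not necessarily the paper's choice.
    """
    top = [r.score for r in results[:k]]
    if len(top) < 2:
        return 0.0
    return pstdev(top)


def agentic_rag(question: str, reasoning_model, retriever,
                qpp_threshold: float = 0.05, max_steps: int = 8) -> str:
    """Run a reason-search loop; warn the model when retrieval looks poor."""
    context = f"Question: {question}"
    for _ in range(max_steps):
        step = reasoning_model(context)      # returns {"action": ..., "content": ...}
        if step["action"] == "answer":
            return step["content"]           # model decided it has enough evidence
        # Otherwise the model issued a search query (the retriever "tool call").
        results = retriever(step["content"])
        if qpp_estimate(results) < qpp_threshold:
            # Predicted-poor retrieval: flag it instead of silently appending.
            context += "\n[search results likely unhelpful; consider rephrasing]"
        else:
            context += "\n" + "\n".join(r.text for r in results[:3])
    return "No answer produced within the step budget."
```

In Search-R1-style models the "search" action corresponds to the model emitting a query as a tool call mid-generation; the threshold value and the textual warning above are purely illustrative design choices for how a QPP signal could inform the model that retrieved results are unlikely to be useful.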
Related papers
- RAVine: Reality-Aligned Evaluation for Agentic Search [7.4420114967110385]
RAVine is a Reality-Aligned eValuation framework for agentic LLMs with search. RAVine targets multi-point queries and long-form answers that better reflect user intents. We benchmark a series of models using RAVine and derive several insights.
arXiv Detail & Related papers (2025-07-22T16:08:12Z)
- FrugalRAG: Learning to retrieve and reason for multi-hop QA [10.193015391271535]
Large-scale fine-tuning is not needed to improve RAG metrics. Supervised and RL-based fine-tuning can help RAG from the perspective of frugality.
arXiv Detail & Related papers (2025-07-10T11:02:13Z)
- SPEAR: Subset-sampled Performance Evaluation via Automated Ground Truth Generation for RAG [1.908792985190258]
This paper proposes SEARA, which addresses evaluation data challenges through subset sampling techniques. Based on real user queries, this method enables fully automated retriever evaluation at low cost. We validate our method across classic RAG applications in rednote, including a knowledge-based Q&A system and a retrieval-based travel assistant.
arXiv Detail & Related papers (2025-07-09T05:13:09Z)
- Maximally-Informative Retrieval for State Space Model Generation [59.954191072042526]
We introduce Retrieval In-Context Optimization (RICO) to minimize model uncertainty for a particular query at test-time. Unlike traditional retrieval-augmented generation (RAG), which relies on retrieval signals external to the model, our approach leverages direct feedback from the model. We show that standard top-$k$ retrieval with model gradients can approximate our optimization procedure, and provide connections to the leave-one-out loss.
arXiv Detail & Related papers (2025-06-13T18:08:54Z)
- Chain-of-Retrieval Augmented Generation [72.06205327186069]
This paper introduces an approach for training o1-like RAG models that retrieve and reason over relevant information step by step before generating the final answer. Our proposed method, CoRAG, allows the model to dynamically reformulate the query based on the evolving state.
arXiv Detail & Related papers (2025-01-24T09:12:52Z)
- Toward Optimal Search and Retrieval for RAG [39.69494982983534]
Retrieval-augmented generation (RAG) is a promising method for addressing some of the memory-related challenges associated with Large Language Models (LLMs).
Here, we work towards understanding how retrievers can be optimized for RAG pipelines on common tasks such as Question Answering (QA).
arXiv Detail & Related papers (2024-11-11T22:06:51Z)
- DeepNote: Note-Centric Deep Retrieval-Augmented Generation [72.70046559930555]
Retrieval-Augmented Generation (RAG) mitigates factual errors and hallucinations in Large Language Models (LLMs) for question-answering (QA). We develop DeepNote, an adaptive RAG framework that achieves in-depth and robust exploration of knowledge sources through note-centric adaptive retrieval.
arXiv Detail & Related papers (2024-10-11T14:03:29Z)
- Controlling Risk of Retrieval-augmented Generation: A Counterfactual Prompting Framework [77.45983464131977]
We focus on how likely it is that a RAG model's prediction is incorrect, resulting in uncontrollable risks in real-world applications. Our research identifies two critical latent factors affecting RAG's confidence in its predictions. We develop a counterfactual prompting framework that induces the models to alter these factors and analyzes the effect on their answers.
arXiv Detail & Related papers (2024-09-24T14:52:14Z)
- ReFIT: Relevance Feedback from a Reranker during Inference [109.33278799999582]
Retrieve-and-rerank is a prevalent framework in neural information retrieval.
We propose to leverage the reranker to improve recall by making it provide relevance feedback to the retriever at inference time.
arXiv Detail & Related papers (2023-05-19T15:30:33Z)
- Query Performance Prediction: From Ad-hoc to Conversational Search [55.37199498369387]
Query performance prediction (QPP) is a core task in information retrieval.
Research has shown the effectiveness and usefulness of QPP for ad-hoc search.
Despite its potential, QPP for conversational search has been little studied.
arXiv Detail & Related papers (2023-05-18T12:37:01Z)
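Since QPP is central to the main paper above, a concrete example of a post-retrieval predictor may help. The sketch below is in the spirit of NQC (Normalized Query Commitment), which relates the spread of the top-k retrieval scores to their magnitude; normalising by the mean top-k score is a simplification introduced here (the original formulation normalises by a corpus-level score), and nothing below is tied to the specific predictors used in the papers listed.

```python
"""A minimal post-retrieval QPP predictor in the spirit of NQC.

Assumes a list of top-k retrieval scores is available; normalisation by the
mean top-k score stands in for the corpus score of the original formulation.
"""
import math


def nqc_like(scores: list[float], k: int = 100) -> float:
    """Higher values suggest a more 'committed' (likely effective) query."""
    top = scores[:k]
    if not top:
        return 0.0
    mean = sum(top) / len(top)
    variance = sum((s - mean) ** 2 for s in top) / len(top)
    return math.sqrt(variance) / mean if mean else 0.0


# Example: a peaked score distribution scores higher than a flat one.
print(nqc_like([9.1, 8.7, 3.2, 3.1, 3.0]))   # clearly separated top results
print(nqc_like([4.1, 4.0, 4.0, 3.9, 3.9]))   # flat, undifferentiated results
```

In the setting studied in the main paper, such an estimate would be computed for each query the agent generates during its reasoning process; the abstract reports that these QPP estimates correlate positively with final answer quality.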