ReSLLM: Large Language Models are Strong Resource Selectors for
Federated Search
- URL: http://arxiv.org/abs/2401.17645v1
- Date: Wed, 31 Jan 2024 07:58:54 GMT
- Title: ReSLLM: Large Language Models are Strong Resource Selectors for
Federated Search
- Authors: Shuai Wang, Shengyao Zhuang, Bevan Koopman, Guido Zuccon
- Abstract summary: Federated search will become increasingly pivotal in the context of Retrieval-Augmented Generation pipelines.
Current SOTA resource selection methodologies rely on feature-based learning approaches.
We propose ReSLLM to drive the selection of resources in federated search in a zero-shot setting.
- Score: 35.44746116088232
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated search, which involves integrating results from multiple
independent search engines, will become increasingly pivotal in the context of
Retrieval-Augmented Generation pipelines empowering LLM-based applications such
as chatbots. These systems often distribute queries among various search
engines, ranging from specialized (e.g., PubMed) to general (e.g., Google),
based on the nature of user utterances. A critical aspect of federated search
is resource selection - the selection of appropriate resources prior to issuing
the query to ensure high-quality and rapid responses, and to contain the costs
associated with calling the external search engines. However, current SOTA
resource selection methodologies primarily rely on feature-based learning
approaches. These methods often involve the labour-intensive and expensive
creation of training labels for each resource. In contrast, LLMs have exhibited
strong effectiveness as zero-shot methods across NLP and IR tasks. We
hypothesise that, in the context of federated search, LLMs can assess the
relevance of resources without the need for extensive predefined labels or
features. In this paper, we propose ReSLLM. Our ReSLLM method exploits LLMs to
drive the selection of resources in federated search in a zero-shot setting. In
addition, we devise an unsupervised fine-tuning protocol, the Synthetic Label
Augmentation Tuning (SLAT), where the relevance of previously logged queries
and snippets from resources is predicted using an off-the-shelf LLM and then in
turn used to fine-tune ReSLLM with respect to resource selection. Our empirical
evaluation and analysis detail the factors influencing the effectiveness of
LLMs in this context. The results showcase the merits of ReSLLM for resource
selection: not only competitive effectiveness in the zero-shot setting, but
also large gains in effectiveness when fine-tuned using the SLAT protocol.
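
To make the zero-shot setting concrete, the sketch below shows one way an LLM could be prompted to score each candidate resource for a query. The prompt wording, the `call_llm` helper, and the 0-10 rating scale are illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch of zero-shot, prompt-based resource selection in the spirit
# of ReSLLM. The prompt wording, the `call_llm` helper, and the 0-10 scale are
# illustrative assumptions, not the authors' exact setup.
from typing import Callable, Dict, List, Tuple


def select_resources(
    query: str,
    resources: Dict[str, str],       # resource name -> short description or sample snippets
    call_llm: Callable[[str], str],  # any text-in/text-out LLM endpoint supplied by the caller
    top_k: int = 3,
) -> List[Tuple[str, float]]:
    """Ask the LLM to rate each search engine's relevance to the query, zero-shot."""
    scored = []
    for name, description in resources.items():
        prompt = (
            "You are selecting search engines for a federated search system.\n"
            f"Query: {query}\n"
            f"Resource: {name}\n"
            f"Resource description: {description}\n"
            "Rate from 0 (irrelevant) to 10 (highly relevant) how likely this resource "
            "is to return useful results for the query. Answer with a single number."
        )
        reply = call_llm(prompt)
        try:
            score = float(reply.strip().split()[0])
        except (ValueError, IndexError):
            score = 0.0  # fall back when the LLM reply cannot be parsed as a number
        scored.append((name, score))
    # Query only the highest-scoring resources to keep response quality high and cost low.
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]
```

In the zero-shot setting the LLM is used as-is; under the SLAT protocol, a similar prompt would first be applied by an off-the-shelf LLM to previously logged queries and resource snippets, and the resulting synthetic relevance labels would then be used to fine-tune the selector.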
Related papers
- Invar-RAG: Invariant LLM-aligned Retrieval for Better Generation [43.630437906898635]
We propose a novel two-stage fine-tuning architecture called Invar-RAG.
In the retrieval stage, an LLM-based retriever is constructed by integrating LoRA-based representation learning.
In the generation stage, a refined fine-tuning method is employed to improve LLM accuracy in generating answers based on retrieved information.
arXiv Detail & Related papers (2024-11-11T14:25:37Z)
- RuAG: Learned-rule-augmented Generation for Large Language Models [62.64389390179651]
We propose a novel framework, RuAG, to automatically distill large volumes of offline data into interpretable first-order logic rules.
We evaluate our framework on public and private tasks, including natural language processing, time-series, decision-making, and industrial tasks.
arXiv Detail & Related papers (2024-11-04T00:01:34Z)
- CHIQ: Contextual History Enhancement for Improving Query Rewriting in Conversational Search [67.6104548484555]
We introduce CHIQ, a two-step method that leverages the capabilities of open-source large language models (LLMs) to resolve ambiguities in the conversation history before query rewriting.
We demonstrate on five well-established benchmarks that CHIQ leads to state-of-the-art results across most settings.
arXiv Detail & Related papers (2024-06-07T15:23:53Z)
- REQUAL-LM: Reliability and Equity through Aggregation in Large Language Models [10.684722193666607]
We introduce REQUAL-LM, a novel method for finding reliable and equitable large language models (LLMs) outputs through aggregation.
Specifically, we develop a Monte Carlo method based on repeated sampling to find a reliable output close to the mean of the underlying distribution of possible outputs.
We formally define terms such as reliability and bias, and design an equity-aware aggregation to minimize harmful bias while finding a highly reliable output.
arXiv Detail & Related papers (2024-04-17T22:12:41Z)
- Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs [60.40396361115776]
This paper introduces a novel collaborative approach, namely SlimPLM, that detects missing knowledge in large language models (LLMs) with a slim proxy model.
We employ a proxy model which has far fewer parameters, and take its answers as heuristic answers.
Heuristic answers are then utilized to predict the knowledge required to answer the user question, as well as the known and unknown knowledge within the LLM.
arXiv Detail & Related papers (2024-02-19T11:11:08Z)
- Blinded by Generated Contexts: How Language Models Merge Generated and Retrieved Contexts When Knowledge Conflicts? [45.233517779029334]
We identify whether responses are attributed to generated or retrieved contexts.
Experiments reveal a significant bias in several LLMs to favor generated contexts, even when they provide incorrect information.
arXiv Detail & Related papers (2024-01-22T12:54:04Z)
- LaGR-SEQ: Language-Guided Reinforcement Learning with Sample-Efficient Querying [71.86163159193327]
Large language models (LLMs) have recently demonstrated their impressive ability to provide context-aware responses via text.
This ability could potentially be used to predict plausible solutions in sequential decision making tasks pertaining to pattern completion.
We introduce LaGR, which uses this predictive ability of LLMs to propose solutions to tasks that have been partially completed by a primary reinforcement learning (RL) agent.
arXiv Detail & Related papers (2023-08-21T02:07:35Z)
- PALR: Personalization Aware LLMs for Recommendation [7.407353565043918]
PALR aims to combine user history behaviors (such as clicks, purchases, ratings, etc.) with large language models (LLMs) to generate user preferred items.
Our solution outperforms state-of-the-art models on various sequential recommendation tasks.
arXiv Detail & Related papers (2023-05-12T17:21:33Z)
- Synergistic Interplay between Search and Large Language Models for Information Retrieval [141.18083677333848]
InteR allows RMs to expand knowledge in queries using LLM-generated knowledge collections.
InteR achieves overall superior zero-shot retrieval performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-12T11:58:15Z)
- Information Extraction in Low-Resource Scenarios: Survey and Perspective [56.5556523013924]
Information Extraction seeks to derive structured information from unstructured texts.
This paper presents a review of neural approaches to low-resource IE from traditional and LLM-based perspectives.
arXiv Detail & Related papers (2022-02-16T13:44:00Z)