Unsupervised Dense Retrieval Deserves Better Positive Pairs: Scalable
Augmentation with Query Extraction and Generation
- URL: http://arxiv.org/abs/2212.08841v1
- Date: Sat, 17 Dec 2022 10:43:25 GMT
- Title: Unsupervised Dense Retrieval Deserves Better Positive Pairs: Scalable
Augmentation with Query Extraction and Generation
- Authors: Rui Meng, Ye Liu, Semih Yavuz, Divyansh Agarwal, Lifu Tu, Ning Yu,
Jianguo Zhang, Meghana Bhat, Yingbo Zhou
- Abstract summary: We explore two categories of methods for creating pseudo query-document pairs, named query extraction (QExt) and transferred query generation (TQGen).
QExt extracts pseudo queries from document structures or by selecting salient random spans, and TQGen utilizes generation models trained for other NLP tasks.
Experiments show that dense retrievers trained with individual augmentation methods can perform comparably to multiple strong baselines.
- Score: 27.391814046104646
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Dense retrievers have made significant strides in obtaining state-of-the-art
results on text retrieval and open-domain question answering (ODQA). Yet most
of these achievements were made possible with the help of large annotated
datasets; unsupervised learning for dense retrieval models remains an open
problem. In this work, we explore two categories of methods for creating pseudo
query-document pairs, named query extraction (QExt) and transferred query
generation (TQGen), to augment the retriever training in an annotation-free and
scalable manner. Specifically, QExt extracts pseudo queries by document
structures or selecting salient random spans, and TQGen utilizes generation
models trained for other NLP tasks (e.g., summarization) to produce pseudo
queries. Extensive experiments show that dense retrievers trained with
individual augmentation methods can perform comparably to multiple
strong baselines, and combining them leads to further improvements, achieving
state-of-the-art performance of unsupervised dense retrieval on both BEIR and
ODQA datasets.
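To make the two augmentation families concrete, the sketch below builds pseudo query-document pairs from raw documents: QExt-style queries from the document title or a randomly chosen span, and TQGen-style queries from an off-the-shelf summarizer. This is a minimal illustration of the idea, not the authors' implementation; the Hugging Face model choice, span length, and generation settings are assumptions.

```python
# Minimal sketch of the QExt and TQGen augmentation families described above;
# not the authors' released code. Model choice and span heuristic are
# illustrative assumptions.
import random
from transformers import pipeline

def qext_pairs(title: str, body: str, span_len: int = 10):
    """Query extraction (QExt): pseudo queries taken from document structure
    (here, the title) or from a randomly selected span of the body."""
    pairs = [(title, body)]                              # structure-based query
    tokens = body.split()
    if len(tokens) > span_len:
        start = random.randrange(len(tokens) - span_len)
        span = " ".join(tokens[start:start + span_len])  # random-span query
        pairs.append((span, body))
    return pairs

# Transferred query generation (TQGen): reuse a model trained for another
# task (summarization here) to write a pseudo query for each document.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def tqgen_pairs(body: str):
    summary = summarizer(body, max_length=32, min_length=5)[0]["summary_text"]
    return [(summary, body)]

# The resulting (pseudo_query, document) pairs serve as positives for
# contrastive training of a dense retriever.
doc = ("Dense retrievers encode queries and documents into a shared vector "
       "space and rank documents by vector similarity.")
pairs = qext_pairs("Dense retrieval", doc) + tqgen_pairs(doc)
```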
Related papers
- Learning More Effective Representations for Dense Retrieval through Deliberate Thinking Before Search [65.53881294642451]
The Deliberate Thinking based Dense Retriever (DEBATER) enhances recent dense retrievers by enabling them to learn more effective document representations through a step-by-step thinking process.
Experimental results show that DEBATER significantly outperforms existing methods across several retrieval benchmarks.
arXiv Detail & Related papers (2025-02-18T15:56:34Z) - Reinforced Information Retrieval [35.0424269986952]
We present Reinforced-IR, a novel approach that jointly adapts a pre-trained retriever and generator for precise cross-domain retrieval.
A key innovation of Reinforced-IR is its Self-Boosting framework, which enables the retriever and generator to learn from each other's feedback.
In our experiment, Reinforced-IR outperforms existing domain adaptation methods by a large margin, leading to substantial improvements in retrieval quality across a wide range of application scenarios.
arXiv Detail & Related papers (2025-02-17T08:52:39Z) - Chain-of-Retrieval Augmented Generation [72.06205327186069]
This paper introduces an approach for training o1-like RAG models that retrieve and reason over relevant information step by step before generating the final answer.
Our proposed method, CoRAG, allows the model to dynamically reformulate the query based on the evolving state.
arXiv Detail & Related papers (2025-01-24T09:12:52Z) - Unsupervised Query Routing for Retrieval Augmented Generation [64.47987041500966]
We introduce a novel unsupervised method that constructs the "upper-bound" response to evaluate the quality of retrieval-augmented responses.
This evaluation makes it possible to choose the most suitable search engine for a given query.
By eliminating manual annotations, our approach can automatically process large-scale real user queries and create training data.
arXiv Detail & Related papers (2025-01-14T02:27:06Z) - MBA-RAG: a Bandit Approach for Adaptive Retrieval-Augmented Generation through Question Complexity [30.346398341996476]
We propose a reinforcement learning-based framework that dynamically selects the most suitable retrieval strategy based on query complexity.
Our method achieves new state-of-the-art results on multiple single-hop and multi-hop datasets while reducing retrieval costs.
arXiv Detail & Related papers (2024-12-02T14:55:02Z) - Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and Hybrid Query-Based Retrievers [0.0]
Retrieval-Augmented Generation (RAG) is a prevalent approach to infusing a private knowledge base of documents into Large Language Models (LLMs) to build Generative Q&A (Question-Answering) systems.
We propose the 'Blended RAG' method of leveraging semantic search techniques, such as Vector indexes and Sparse indexes, blended with hybrid query strategies.
Our study achieves better retrieval results and sets new benchmarks for IR (Information Retrieval) datasets such as NQ and TREC-COVID.
arXiv Detail & Related papers (2024-03-22T17:13:46Z) - Corrective Retrieval Augmented Generation [36.04062963574603]
Retrieval-augmented generation (RAG) relies heavily on the relevance of retrieved documents, raising concerns about how the model behaves if retrieval goes wrong.
We propose Corrective Retrieval Augmented Generation (CRAG) to improve the robustness of generation.
CRAG is plug-and-play and can be seamlessly coupled with various RAG-based approaches.
arXiv Detail & Related papers (2024-01-29T04:36:39Z) - Noisy Self-Training with Synthetic Queries for Dense Retrieval [49.49928764695172]
We introduce a novel noisy self-training framework combined with synthetic queries.
Experimental results show that our method improves consistently over existing methods.
Our method is data efficient and outperforms competitive baselines.
arXiv Detail & Related papers (2023-11-27T06:19:50Z) - Revisiting Sparse Retrieval for Few-shot Entity Linking [33.15662306409253]
We propose an ELECTRA-based keyword extractor to denoise the mention context and construct a better query expression.
For training the extractor, we propose a distant supervision method to automatically generate training data based on overlapping tokens between mention contexts and entity descriptions.
Experimental results on the ZESHEL dataset demonstrate that the proposed method outperforms state-of-the-art models by a significant margin across all test domains.
arXiv Detail & Related papers (2023-10-19T03:51:10Z) - Query2doc: Query Expansion with Large Language Models [69.9707552694766]
The proposed method first generates pseudo-documents by few-shot prompting large language models (LLMs); a rough sketch of this expansion idea appears after this list.
query2doc boosts the performance of BM25 by 3% to 15% on ad-hoc IR datasets.
Our method also benefits state-of-the-art dense retrievers in terms of both in-domain and out-of-domain results.
arXiv Detail & Related papers (2023-03-14T07:27:30Z) - DORE: Document Ordered Relation Extraction based on Generative Framework [56.537386636819626]
This paper investigates the root cause of the underwhelming performance of the existing generative DocRE models.
We propose to generate a symbolic and ordered sequence from the relation matrix, which is deterministic and easier for the model to learn.
Experimental results on four datasets show that our proposed method can improve the performance of the generative DocRE models.
arXiv Detail & Related papers (2022-10-28T11:18:10Z) - Autoregressive Search Engines: Generating Substrings as Document
Identifiers [53.0729058170278]
Autoregressive language models are emerging as the de-facto standard for generating answers.
Previous work has explored ways to partition the search space into hierarchical structures.
In this work we propose an alternative that doesn't force any structure in the search space: using all ngrams in a passage as its possible identifiers.
arXiv Detail & Related papers (2022-04-22T10:45:01Z) - GQE-PRF: Generative Query Expansion with Pseudo-Relevance Feedback [8.142861977776256]
We propose a novel approach which effectively integrates text generation models into PRF-based query expansion.
Our approach generates augmented query terms via neural text generation models conditioned on both the initial query and pseudo-relevance feedback.
We evaluate the performance of our approach on information retrieval tasks using two benchmark datasets.
arXiv Detail & Related papers (2021-08-13T01:09:02Z) - Generation-Augmented Retrieval for Open-domain Question Answering [134.27768711201202]
We propose Generation-Augmented Retrieval (GAR) for answering open-domain questions.
We show that generating diverse contexts for a query is beneficial as fusing their results consistently yields better retrieval accuracy.
GAR achieves state-of-the-art performance on Natural Questions and TriviaQA datasets under the extractive QA setup when equipped with an extractive reader.
arXiv Detail & Related papers (2020-09-17T23:08:01Z)
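The Query2doc entry above describes the most directly reusable recipe in this list: prompt an LLM to write a pseudo-document for the query, then append it to the (repeated) query before sparse retrieval. The sketch below is a rough illustration under stated assumptions, not the paper's exact setup; the generate_pseudo_document stub, the repetition factor of 5, and the rank_bm25 usage are all illustrative choices.

```python
# Rough sketch of Query2doc-style expansion for sparse retrieval.
# generate_pseudo_document() stands in for any few-shot LLM call; the
# repetition factor is an assumed value, not a tuned setting.
from rank_bm25 import BM25Okapi

def generate_pseudo_document(query: str) -> str:
    # Placeholder: in practice, prompt an LLM with something like
    # "Write a passage that answers the question: {query}".
    return "BM25 is a bag-of-words ranking function used by search engines."

def expand_query(query: str, repeat: int = 5) -> str:
    # Repeating the short query keeps its terms from being drowned out
    # by the much longer generated passage.
    return " ".join([query] * repeat + [generate_pseudo_document(query)])

corpus = [
    "dense retrievers learn vector representations of queries and documents",
    "bm25 is a strong sparse retrieval baseline based on term matching",
]
bm25 = BM25Okapi([doc.split() for doc in corpus])
scores = bm25.get_scores(expand_query("what is bm25").split())
print(scores)  # the BM25-related document should score higher
```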
This list is automatically generated from the titles and abstracts of the papers on this site.