How Does Generative Retrieval Scale to Millions of Passages?
- URL: http://arxiv.org/abs/2305.11841v1
- Date: Fri, 19 May 2023 17:33:38 GMT
- Title: How Does Generative Retrieval Scale to Millions of Passages?
- Authors: Ronak Pradeep, Kai Hui, Jai Gupta, Adam D. Lelkes, Honglei Zhuang,
Jimmy Lin, Donald Metzler, Vinh Q. Tran
- Abstract summary: We conduct the first empirical study of generative retrieval techniques across various corpus scales.
We scale generative retrieval to millions of passages, using a corpus of 8.8M passages and evaluating model sizes up to 11B parameters.
While generative retrieval is competitive with state-of-the-art dual encoders on small corpora, scaling to millions of passages remains an important and unsolved challenge.
- Score: 68.98628807288972
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Popularized by the Differentiable Search Index, the emerging paradigm of
generative retrieval re-frames the classic information retrieval problem into a
sequence-to-sequence modeling task, forgoing external indices and encoding an
entire document corpus within a single Transformer. Although many different
approaches have been proposed to improve the effectiveness of generative
retrieval, they have only been evaluated on document corpora on the order of
100k in size. We conduct the first empirical study of generative retrieval
techniques across various corpus scales, ultimately scaling up to the entire MS
MARCO passage ranking task with a corpus of 8.8M passages and evaluating model
sizes up to 11B parameters. We uncover several findings about scaling
generative retrieval to millions of passages; notably, the central importance
of using synthetic queries as document representations during indexing, the
ineffectiveness of existing proposed architecture modifications when accounting
for compute cost, and the limits of naively scaling model parameters with
respect to retrieval performance. While we find that generative retrieval is
competitive with state-of-the-art dual encoders on small corpora, scaling to
millions of passages remains an important and unsolved challenge. We believe
these findings will be valuable for the community to clarify the current state
of generative retrieval, highlight the unique challenges, and inspire new
research directions.
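To make the formulation concrete, the following is a minimal sketch of a DSI-style generative retriever, assuming a T5-style seq2seq model from the Hugging Face transformers library; the model name, the toy corpus, the atomic docids, and the hand-written stand-ins for synthetic queries are illustrative assumptions, not the authors' actual pipeline or data.
```python
# Minimal sketch, NOT the authors' implementation: generative retrieval as
# sequence-to-sequence modeling, with synthetic queries standing in for
# document text at indexing time (the representation the paper finds critical).
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")   # placeholder model size
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Toy corpus: each passage is assigned an atomic string identifier (docid).
corpus = {
    "doc_0": "The Amazon is the largest tropical rainforest on Earth.",
    "doc_1": "Transformers use self-attention to model token interactions.",
}

# Indexing task: represent each document by queries it could answer. In the
# paper these come from a doc2query-style generator; here they are hand-written.
synthetic_queries = {
    "doc_0": ["what is the largest rainforest"],
    "doc_1": ["how do transformers model token interactions"],
}

# Seq2seq training pairs: input = query text, target = docid string.
pairs = [(q, docid) for docid, qs in synthetic_queries.items() for q in qs]
inputs = tokenizer([q for q, _ in pairs], return_tensors="pt", padding=True)
labels = tokenizer([d for _, d in pairs], return_tensors="pt", padding=True).input_ids
labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss

loss = model(**inputs, labels=labels).loss  # cross-entropy over docid tokens
print("indexing loss:", float(loss))

# Retrieval: decode a docid directly from the query. Real systems constrain
# decoding to valid docids (e.g. via a prefix trie); unconstrained beam search
# is used here only to keep the sketch short.
query = tokenizer("largest rainforest on earth", return_tensors="pt").input_ids
out = model.generate(query, max_new_tokens=8, num_beams=4)
print("predicted docid:", tokenizer.decode(out[0], skip_special_tokens=True))
```
In a full system the indexing pairs above would number in the millions, and decoding would be restricted to the set of valid docids rather than free-form text.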
Related papers
- State-Space Modeling in Long Sequence Processing: A Survey on Recurrence in the Transformer Era [59.279784235147254]
This survey provides an in-depth summary of the latest approaches that are based on recurrent models for sequential data processing.
The emerging picture suggests there is room for novel routes built on learning algorithms that depart from standard Backpropagation Through Time.
arXiv Detail & Related papers (2024-06-13T12:51:22Z)
- List-aware Reranking-Truncation Joint Model for Search and Retrieval-augmented Generation [80.12531449946655]
We propose a Reranking-Truncation joint model (GenRT) that can perform the two tasks concurrently.
GenRT integrates reranking and truncation via a generative paradigm based on an encoder-decoder architecture.
Our method achieves SOTA performance on both reranking and truncation tasks for web search and retrieval-augmented LLMs.
arXiv Detail & Related papers (2024-02-05T06:52:53Z)
- Corrective Retrieval Augmented Generation [36.04062963574603]
Retrieval-augmented generation (RAG) relies heavily on the relevance of retrieved documents, raising concerns about how the model behaves if retrieval goes wrong.
We propose Corrective Retrieval Augmented Generation (CRAG) to improve the robustness of generation.
CRAG is plug-and-play and can be seamlessly coupled with various RAG-based approaches.
arXiv Detail & Related papers (2024-01-29T04:36:39Z)
- GAR-meets-RAG Paradigm for Zero-Shot Information Retrieval [16.369071865207808]
We propose a novel GAR-meets-RAG recurrence formulation that overcomes the challenges of existing paradigms.
A key design principle is that the rewrite-retrieval stages improve the recall of the system and a final re-ranking stage improves the precision.
Our method establishes a new state-of-the-art in the BEIR benchmark, outperforming previous best results in Recall@100 and nDCG@10 metrics on 6 out of 8 datasets.
arXiv Detail & Related papers (2023-10-31T03:52:08Z)
- MGAS: Multi-Granularity Architecture Search for Trade-Off Between Model Effectiveness and Efficiency [10.641875933652647]
We introduce multi-granularity architecture search (MGAS) to discover both effective and efficient neural networks.
We learn discretization functions specific to each granularity level to adaptively determine the unit remaining ratio according to the evolving architecture.
Extensive experiments on CIFAR-10, CIFAR-100 and ImageNet demonstrate that MGAS outperforms other state-of-the-art methods in achieving a better trade-off between model performance and model size.
arXiv Detail & Related papers (2023-10-23T16:32:18Z)
- Enhancing Documents with Multidimensional Relevance Statements in Cross-encoder Re-ranking [2.2691623651741]
We propose a novel approach to consider multiple dimensions of relevance beyond topicality in cross-encoder re-ranking.
Our results show that the proposed approach statistically outperforms both aggregation-based and cross-encoder re-rankers.
arXiv Detail & Related papers (2023-06-19T14:37:26Z)
- DSI++: Updating Transformer Memory with New Documents [95.70264288158766]
We introduce DSI++, a continual learning challenge for DSI to incrementally index new documents.
We show that continual indexing of new documents leads to considerable forgetting of previously indexed documents.
We introduce a generative memory to sample pseudo-queries for documents and supplement them during continual indexing to prevent forgetting for the retrieval task.
arXiv Detail & Related papers (2022-12-19T18:59:34Z)
- CorpusBrain: Pre-train a Generative Retrieval Model for Knowledge-Intensive Language Tasks [62.22920673080208]
A single-step generative model can dramatically simplify the search process and be optimized in an end-to-end manner.
We name the pre-trained generative retrieval model CorpusBrain, as all information about the corpus is encoded in its parameters without the need to construct an additional index.
arXiv Detail & Related papers (2022-08-16T10:22:49Z) - Autoregressive Search Engines: Generating Substrings as Document
Identifiers [53.0729058170278]
Autoregressive language models are emerging as the de-facto standard for generating answers.
Previous work has explored ways to partition the search space into hierarchical structures.
In this work we propose an alternative that does not force any structure in the search space: using all n-grams in a passage as its possible identifiers (a toy sketch follows this entry).
arXiv Detail & Related papers (2022-04-22T10:45:01Z)
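As a rough illustration of that last idea, here is a toy, pure-Python version of retrieval via n-gram identifiers; it uses a plain dictionary and an overlap count where the actual paper uses an FM-index and autoregressive scoring, so the data and scoring rule here are assumptions made only for the example.
```python
# Toy sketch, not the paper's system: every n-gram occurring in a passage can
# act as one of that passage's identifiers, and retrieval aggregates the
# passages matched by a set of generated identifier n-grams.
from collections import defaultdict

def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

corpus = {
    "p0": "generative retrieval encodes the corpus in model parameters",
    "p1": "dual encoders embed queries and passages into a shared vector space",
}

index = defaultdict(set)  # n-gram -> passages containing it
for pid, text in corpus.items():
    toks = text.split()
    for n in (1, 2, 3):
        for gram in ngrams(toks, n):
            index[gram].add(pid)

def retrieve(identifier_ngrams):
    """Score passages by how many identifier n-grams they contain.
    In the paper these n-grams are decoded by an autoregressive LM under
    FM-index constraints; here they are simply passed in as strings."""
    scores = defaultdict(int)
    for g in identifier_ngrams:
        for pid in index.get(tuple(g.split()), set()):
            scores[pid] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(retrieve(["generative retrieval", "model parameters"]))  # [('p0', 2)]
```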