Transformer Memory as a Differentiable Search Index
- URL: http://arxiv.org/abs/2202.06991v2
- Date: Wed, 16 Feb 2022 09:05:59 GMT
- Title: Transformer Memory as a Differentiable Search Index
- Authors: Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh
Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W.
Cohen, Donald Metzler
- Abstract summary: We introduce the Differentiable Search Index (DSI), a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids.
We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes.
- Score: 102.41278496436948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we demonstrate that information retrieval can be accomplished
with a single Transformer, in which all information about the corpus is encoded
in the parameters of the model. To this end, we introduce the Differentiable
Search Index (DSI), a new paradigm that learns a text-to-text model that maps
string queries directly to relevant docids; in other words, a DSI model answers
queries directly using only its parameters, dramatically simplifying the whole
retrieval process. We study variations in how documents and their identifiers
are represented, variations in training procedures, and the interplay between
models and corpus sizes. Experiments demonstrate that given appropriate design
choices, DSI significantly outperforms strong baselines such as dual encoder
models. Moreover, DSI demonstrates strong generalization capabilities,
outperforming a BM25 baseline in a zero-shot setup.
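To make the text-to-text formulation concrete, here is a minimal sketch of the DSI idea: a single seq2seq model is trained both to map document text to a docid string (indexing) and to map queries to docid strings (retrieval), so that at query time the model's parameters alone act as the index. The T5 backbone, toy corpus, task prefixes, docid scheme, and hyperparameters below are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of the text-to-text DSI formulation, assuming a HuggingFace T5
# backbone and simple atomic string docids. Model, data, and hyperparameters are
# illustrative, not the paper's exact configuration.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

corpus = {
    "17": "the transformer architecture relies entirely on attention",
    "42": "bm25 is a bag-of-words ranking function used in search",
}
train_queries = [("what is bm25", "42"), ("attention based models", "17")]

# Multi-task training data: indexing pairs (document text -> docid) plus
# retrieval pairs (query -> docid), all handled by the same seq2seq model.
examples = [(f"index: {text}", docid) for docid, text in corpus.items()]
examples += [(f"query: {q}", docid) for q, docid in train_queries]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for source, target in examples:
    batch = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(target, return_tensors="pt").input_ids
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Retrieval: the model's parameters are the index; decoding emits a docid string.
model.eval()
query = tokenizer("query: what is bm25", return_tensors="pt")
out = model.generate(**query, max_new_tokens=8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```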
Related papers
- De-DSI: Decentralised Differentiable Search Index [0.0]
De-DSI is a framework that fuses large language models with genuine decentralization for information retrieval.
It uses the differentiable search index (DSI) concept in a decentralized setting to efficiently connect novel user queries with document identifiers.
arXiv Detail & Related papers (2024-04-18T14:51:55Z)
- How Does Generative Retrieval Scale to Millions of Passages? [68.98628807288972]
We conduct the first empirical study of generative retrieval techniques across various corpus scales.
We scale generative retrieval to millions of passages, using a corpus of 8.8M passages and evaluating model sizes up to 11B parameters.
While generative retrieval is competitive with state-of-the-art dual encoders on small corpora, scaling to millions of passages remains an important and unsolved challenge.
arXiv Detail & Related papers (2023-05-19T17:33:38Z)
- DSI++: Updating Transformer Memory with New Documents [95.70264288158766]
We introduce DSI++, a continual learning challenge for DSI to incrementally index new documents.
We show that continual indexing of new documents leads to considerable forgetting of previously indexed documents.
We introduce a generative memory to sample pseudo-queries for documents and supplement them during continual indexing to prevent forgetting for the retrieval task.
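As a hedged sketch of that generative-memory idea: when new documents arrive, replay pseudo-queries sampled for already-indexed documents alongside the new indexing examples so their docids remain reachable. The function names, replay ratio, and toy query generator below are assumptions for illustration, not the DSI++ implementation.

```python
import random
from typing import Callable, Dict, List, Tuple

def build_continual_batch(
    new_docs: Dict[str, str],          # docid -> text of newly arrived documents
    old_docs: Dict[str, str],          # docid -> text of already-indexed documents
    generate_query: Callable[[str], str],
    replay_ratio: float = 1.0,         # pseudo-queries per new document (illustrative)
) -> List[Tuple[str, str]]:
    """Return (source, target_docid) pairs for one round of continual indexing."""
    batch = [(f"index: {text}", docid) for docid, text in new_docs.items()]
    # Generative memory: pseudo-queries keep old docids in the training mix.
    n_replay = min(int(len(batch) * replay_ratio), len(old_docs))
    for docid in random.sample(list(old_docs), k=n_replay):
        batch.append((f"query: {generate_query(old_docs[docid])}", docid))
    random.shuffle(batch)
    return batch

# Toy usage; the lambda stands in for a trained query-generation model.
old = {"d1": "bm25 is a bag-of-words ranking function used in search"}
new = {"d2": "dense retrieval encodes queries and documents into vectors"}
print(build_continual_batch(new, old, lambda text: " ".join(text.split()[:4])))
```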
arXiv Detail & Related papers (2022-12-19T18:59:34Z)
- Bridging the Gap Between Indexing and Retrieval for Differentiable Search Index with Query Generation [98.02743096197402]
Differentiable Search Index (DSI) is an emerging paradigm for information retrieval.
We propose a simple yet effective indexing framework for DSI, called DSI-QG.
Empirical results on popular mono-lingual and cross-lingual passage retrieval datasets show that DSI-QG significantly outperforms the original DSI model.
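A small sketch of that indexing scheme, under the assumption that each document is represented at indexing time by queries generated for it, so indexing inputs look like the queries seen at retrieval time. The toy generator below stands in for a trained doc2query-style model; names and defaults are illustrative.

```python
from typing import Callable, Dict, List, Tuple

def build_dsi_qg_examples(
    corpus: Dict[str, str],                              # docid -> document text
    generate_queries: Callable[[str, int], List[str]],   # document -> pseudo-queries
    queries_per_doc: int = 3,
) -> List[Tuple[str, str]]:
    """Index each document through generated queries rather than its raw text."""
    examples = []
    for docid, text in corpus.items():
        for query in generate_queries(text, queries_per_doc):
            examples.append((query, docid))
    return examples

# Toy stand-in generator: sliding three-word windows over the document.
def toy_generator(text: str, n: int) -> List[str]:
    words = text.split()
    return [" ".join(words[i:i + 3]) for i in range(min(n, max(len(words) - 2, 1)))]

corpus = {"7": "the differentiable search index maps queries to docids"}
print(build_dsi_qg_examples(corpus, toy_generator))
```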
arXiv Detail & Related papers (2022-06-21T06:21:23Z)
- UnifieR: A Unified Retriever for Large-Scale Retrieval [84.61239936314597]
Large-scale retrieval aims to recall relevant documents from a huge collection given a query.
Recent retrieval methods based on pre-trained language models (PLM) can be coarsely categorized into either dense-vector or lexicon-based paradigms.
We propose a new learning framework, UnifieR, which unifies dense-vector and lexicon-based retrieval in one model with a dual-representing capability.
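As a rough illustration of combining the two paradigms, the sketch below scores a query-document pair with both a dense dot product and a lexicon-weighted term overlap. In UnifieR both representations come from one shared encoder; here the vectors, term weights, and interpolation weight are hand-written stand-ins.

```python
from typing import Dict, List

def hybrid_score(
    q_dense: List[float], d_dense: List[float],        # dense-vector representations
    q_lex: Dict[str, float], d_lex: Dict[str, float],  # lexicon term weights
    alpha: float = 0.5,                                 # illustrative mixing weight
) -> float:
    """Interpolate a dense-vector score with a lexicon-based score."""
    dense = sum(a * b for a, b in zip(q_dense, d_dense))                   # dot product
    lexical = sum(w * d_lex.get(term, 0.0) for term, w in q_lex.items())   # term overlap
    return alpha * dense + (1.0 - alpha) * lexical

# Toy usage with hand-written representations for one query-document pair.
print(hybrid_score([0.1, 0.9], [0.2, 0.8], {"bm25": 1.2}, {"bm25": 0.7, "rank": 0.3}))
```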
arXiv Detail & Related papers (2022-05-23T11:01:59Z)
- Autoregressive Search Engines: Generating Substrings as Document Identifiers [53.0729058170278]
Autoregressive language models are emerging as the de facto standard for generating answers.
Previous work has explored ways to partition the search space into hierarchical structures.
In this work we propose an alternative that doesn't force any structure on the search space: using all n-grams in a passage as its possible identifiers.
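In the full method, generation is constrained so that the emitted string actually occurs in the corpus; the simplified sketch below illustrates only the identifier space itself, i.e. every word n-gram in a passage can serve as an identifier that points back to that passage. The toy corpus, n-gram length, and function names are illustrative assumptions.

```python
from collections import defaultdict
from typing import Dict, Set

def build_ngram_index(corpus: Dict[str, str], max_n: int = 3) -> Dict[str, Set[str]]:
    """Map every word n-gram (n <= max_n) to the ids of passages containing it."""
    index: Dict[str, Set[str]] = defaultdict(set)
    for pid, text in corpus.items():
        words = text.split()
        for n in range(1, max_n + 1):
            for i in range(len(words) - n + 1):
                index[" ".join(words[i:i + n])].add(pid)
    return index

corpus = {"p1": "attention is all you need", "p2": "you need a strong baseline"}
index = build_ngram_index(corpus)
# At query time an autoregressive model would generate one of these n-grams;
# looking it up returns every passage that contains it.
print(index["you need"])   # {'p1', 'p2'} (set order may vary)
```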
arXiv Detail & Related papers (2022-04-22T10:45:01Z)
- Coarse-to-Fine Memory Matching for Joint Retrieval and Classification [0.7081604594416339]
We present a novel end-to-end language model for joint retrieval and classification.
We evaluate it on the standard blind test set of the FEVER fact verification dataset.
We extend exemplar auditing to this setting to analyze and constrain the model.
arXiv Detail & Related papers (2020-11-29T05:06:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.