ACID: Abstractive, Content-Based IDs for Document Retrieval with
Language Models
- URL: http://arxiv.org/abs/2311.08593v1
- Date: Tue, 14 Nov 2023 23:28:36 GMT
- Title: ACID: Abstractive, Content-Based IDs for Document Retrieval with
Language Models
- Authors: Haoxin Li, Phillip Keung, Daniel Cheng, Jungo Kasai, Noah A. Smith
- Abstract summary: We introduce ACID, in which each document's ID is composed of abstractive keyphrases generated by a large language model.
We show that using ACID improves top-10 and top-20 accuracy by 15.6% and 14.4% (relative), respectively, on the MSMARCO 100k retrieval task.
Our results demonstrate the effectiveness of human-readable, natural-language IDs in generative retrieval with LMs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative retrieval (Wang et al., 2022; Tay et al., 2022) is a new approach
for end-to-end document retrieval that directly generates document identifiers
given an input query. Techniques for designing effective, high-quality document
IDs remain largely unexplored. We introduce ACID, in which each document's ID
is composed of abstractive keyphrases generated by a large language model,
rather than an integer ID sequence as done in past work. We compare our method
with the current state-of-the-art technique for ID generation, which produces
IDs through hierarchical clustering of document embeddings. We also examine
simpler methods to generate natural-language document IDs, including the naive
approach of using the first k words of each document as its ID or words with
high BM25 scores in that document. We show that using ACID improves top-10 and
top-20 accuracy by 15.6% and 14.4% (relative) respectively versus the
state-of-the-art baseline on the MSMARCO 100k retrieval task, and 4.4% and 4.0%
respectively on the Natural Questions 100k retrieval task. Our results
demonstrate the effectiveness of human-readable, natural-language IDs in
generative retrieval with LMs. The code for reproducing our results and the
keyword-augmented datasets will be released upon formal publication.
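
To make the compared ID schemes concrete, here is a minimal Python sketch of the two naive baselines from the abstract (first-k words, high-BM25 words) next to an ACID-style abstractive ID. The function names, the toy tokenizer, and the `llm` callable are illustrative assumptions, not the paper's released code.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def first_k_words_id(doc, k=8):
    # Naive baseline: the first k words of the document as its ID.
    return " ".join(tokenize(doc)[:k])

def bm25_keyword_id(doc, corpus, k=8, k1=1.5, b=0.75):
    # Baseline: the k terms with the highest BM25 weight in the document,
    # computed against corpus-level document frequencies.
    docs = [tokenize(d) for d in corpus]
    avgdl = sum(len(d) for d in docs) / len(docs)
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    toks = tokenize(doc)
    tf = Counter(toks)

    def weight(t):
        idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
        norm = tf[t] + k1 * (1 - b + b * len(toks) / avgdl)
        return idf * tf[t] * (k1 + 1) / norm

    return " ".join(sorted(tf, key=weight, reverse=True)[:k])

def acid_style_id(doc, llm):
    # ACID-style ID: abstractive keyphrases produced by an LLM. `llm` is a
    # hypothetical prompt -> text callable; the paper's prompt is not shown.
    return llm(f"Generate a few short keyphrases that summarize this document:\n{doc}")
```

In a generative-retrieval setup, the model is then trained to emit these natural-language IDs given a query, rather than integer ID sequences.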
Related papers
- ProcTag: Process Tagging for Assessing the Efficacy of Document Instruction Data (arXiv, 2024-07-17)
ProcTag is a data-oriented method that assesses the efficacy of document instruction data.
Experiments demonstrate that sampling existing open-sourced and generated document VQA/instruction datasets with ProcTag significantly outperforms current methods for evaluating instruction data.
- Planning Ahead in Generative Retrieval: Guiding Autoregressive Generation through Simultaneous Decoding (arXiv, 2024-04-22)
This paper introduces PAG, a novel optimization and decoding approach that guides autoregressive generation of document identifiers.
Experiments on MSMARCO and TREC Deep Learning Track data reveal that PAG outperforms the state-of-the-art generative retrieval model by a large margin.
- Language Models As Semantic Indexers (arXiv, 2023-10-11)
We introduce LMIndexer, a self-supervised framework to learn semantic IDs with a generative language model.
We show the high quality of the learned IDs and demonstrate their effectiveness on three tasks including recommendation, product search, and document retrieval.
- Multiview Identifiers Enhanced Generative Retrieval (arXiv, 2023-05-26)
Generative retrieval generates identifier strings of passages as the retrieval target.
We propose a new type of identifier, synthetic identifiers, which are generated from the content of a passage.
Our proposed approach performs best in generative retrieval, demonstrating its effectiveness and robustness.
- Query2doc: Query Expansion with Large Language Models (arXiv, 2023-03-14)
The proposed method first generates pseudo-documents by few-shot prompting large language models (LLMs).
Query2doc boosts the performance of BM25 by 3% to 15% on ad-hoc IR datasets.
Our method also benefits state-of-the-art dense retrievers in terms of both in-domain and out-of-domain results.
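
A minimal sketch of the query2doc idea described above, assuming a hypothetical `llm` callable in place of the paper's few-shot prompted model; the exact prompt and the query-repetition scheme are simplified.

```python
def query2doc_expand(query, llm, repeats=5):
    # `llm` is a hypothetical prompt -> text callable; the paper's actual
    # prompt includes in-context examples.
    pseudo_doc = llm(f"Write a passage that answers this query.\nQuery: {query}\nPassage:")
    # Repeating the short query keeps its terms from being swamped by the
    # much longer pseudo-document when BM25 weighs the expanded query.
    return " ".join([query] * repeats + [pseudo_doc])
```

The returned string is then fed to an ordinary BM25 retriever in place of the raw query.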
- Improving Performance of Automatic Keyword Extraction (AKE) Methods Using PoS-Tagging and Enhanced Semantic-Awareness (arXiv, 2022-11-09)
This paper proposes a simple but effective post-processing-based universal approach to improve the performance of any AKE method.
We consider word types retrieved from a PoS-tagging step and two representative sources of semantic information.
For five state-of-the-art (SOTA) AKE methods, our experimental results on 17 selected datasets show that the proposed approach improved their performance both consistently (up to 100% in terms of improved cases) and significantly (between 10.2% and 53.8%, with an average of 25.8%, in terms of F1-score and across all datasets).
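
One plausible reading of the PoS-tagging step, sketched in Python with NLTK; the paper's actual filter and its semantic-awareness components are not reproduced here.

```python
import nltk  # assumes NLTK data is installed: punkt, averaged_perceptron_tagger

# Keep candidate phrases made of nouns and adjectives only; other word
# types are rarely good keywords.
KEEP_TAGS = {"NN", "NNS", "NNP", "NNPS", "JJ"}

def pos_filter(candidate_phrases):
    kept = []
    for phrase in candidate_phrases:
        tags = nltk.pos_tag(nltk.word_tokenize(phrase))
        if all(tag in KEEP_TAGS for _, tag in tags):
            kept.append(phrase)
    return kept
```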
- Generate rather than Retrieve: Large Language Models are Strong Context Generators (arXiv, 2022-09-21)
We present a novel perspective for solving knowledge-intensive tasks by replacing document retrievers with large language model generators.
We call our method generate-then-read (GenRead), which first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer.
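
A minimal sketch of the generate-then-read pipeline, with a hypothetical `llm` prompt -> text callable standing in for both the generator and the reader.

```python
def genread(question, llm):
    # Step 1: generate a contextual document instead of retrieving one.
    context = llm(f"Generate a background document that helps answer the question: {question}")
    # Step 2: read the generated document to produce the final answer.
    return llm(f"Document: {context}\nQuestion: {question}\nAnswer:")
```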
- Autoregressive Search Engines: Generating Substrings as Document Identifiers (arXiv, 2022-04-22)
Autoregressive language models are emerging as the de-facto standard for generating answers.
Previous work has explored ways to partition the search space into hierarchical structures.
In this work, we propose an alternative that does not force any structure on the search space: using all n-grams in a passage as its possible identifiers.
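
A toy illustration of treating a passage's n-grams as its identifiers; the actual system constrains decoding with a compressed substring index rather than the in-memory map assumed here.

```python
from collections import defaultdict

def build_ngram_index(passages, n=3):
    # Map every n-gram to the set of passages that contain it.
    index = defaultdict(set)
    for pid, text in enumerate(passages):
        toks = text.lower().split()
        for i in range(len(toks) - n + 1):
            index[" ".join(toks[i:i + n])].add(pid)
    return index

def rank_passages(generated_ngrams, index):
    # Score each passage by how many generated n-grams it contains.
    scores = defaultdict(int)
    for ng in generated_ngrams:
        for pid in index.get(ng, ()):
            scores[pid] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])
```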
- Extracting Variable-Depth Logical Document Hierarchy from Long Documents: Method, Evaluation, and Application (arXiv, 2021-05-14)
We develop a framework, Hierarchy Extraction from Long Document (HELD), where we sequentially insert each physical object at the proper position on the current tree.
Experiments are based on thousands of long documents from the Chinese and English financial markets and English scientific publications.
We show that logical document hierarchy can be employed to significantly improve the performance of the downstream passage retrieval task.
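
A small sketch of the "sequentially insert at the proper position" step, assuming each physical object already carries an estimated heading depth; the part of HELD that actually predicts depths is not shown.

```python
def build_hierarchy(objects):
    """objects: list of (depth, text) pairs, depth 1 = top-level heading."""
    root = {"text": "ROOT", "depth": 0, "children": []}
    stack = [root]
    for depth, text in objects:
        node = {"text": text, "depth": depth, "children": []}
        # Pop back to the nearest ancestor with a strictly smaller depth,
        # then attach the new object beneath it.
        while stack[-1]["depth"] >= depth:
            stack.pop()
        stack[-1]["children"].append(node)
        stack.append(node)
    return root
```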