RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation
- URL: http://arxiv.org/abs/2310.04408v1
- Date: Fri, 6 Oct 2023 17:55:36 GMT
- Title: RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation
- Authors: Fangyuan Xu, Weijia Shi, Eunsol Choi
- Abstract summary: We propose compressing retrieved documents into textual summaries prior to in-context integration.
This not only reduces computational costs but also relieves LMs of the burden of identifying relevant information in long retrieved documents.
We show that our compressors trained for one LM can transfer to other LMs on the language modeling task and provide summaries largely faithful to the retrieved documents.
- Score: 61.53695868960846
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Retrieving documents and prepending them in-context at inference time improves the performance of language models (LMs) on a wide range of tasks. However, these documents, often spanning hundreds of words, make inference substantially more expensive. We propose compressing the retrieved documents into textual summaries prior to in-context integration. This not only reduces computational costs but also relieves LMs of the burden of identifying relevant information in long retrieved documents. We present two compressors -- an extractive compressor, which selects useful sentences from retrieved documents, and an abstractive compressor, which generates summaries by synthesizing information from multiple documents. Both compressors are trained to improve LMs' performance on end tasks when the generated summaries are prepended to the LMs' input, while keeping the summary concise. If the retrieved documents are irrelevant to the input or offer no additional information to the LM, our compressor can return an empty string, implementing selective augmentation. We evaluate our approach on a language modeling task and an open-domain question answering task. We achieve a compression rate as low as 6% with minimal loss in performance on both tasks, significantly outperforming off-the-shelf summarization models. We show that our compressors trained for one LM can transfer to other LMs on the language modeling task and provide summaries largely faithful to the retrieved documents.
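To make the pipeline concrete, here is a minimal, hypothetical sketch of extractive compression with selective augmentation at inference time. The relevance scorer, thresholds, and naive sentence splitting are illustrative assumptions for this page, not the paper's trained compressors.

```python
# Hypothetical sketch of a RECOMP-style extractive compressor with selective
# augmentation. The scorer, thresholds, and sentence splitting are stand-ins.
from typing import Callable, List


def compress_extractive(
    query: str,
    documents: List[str],
    score_sentence: Callable[[str, str], float],  # hypothetical (query, sentence) -> relevance
    top_k: int = 2,
    min_score: float = 0.5,
) -> str:
    """Select the most useful sentences from the retrieved documents.

    Returns an empty string (selective augmentation) when no sentence clears
    the relevance threshold, i.e. the documents add nothing for this query.
    """
    sentences = [s.strip() for doc in documents for s in doc.split(".") if s.strip()]
    scored = [(score_sentence(query, s), s) for s in sentences]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    kept = [s for score, s in scored[:top_k] if score >= min_score]
    return ". ".join(kept)


def augmented_input(query: str, summary: str) -> str:
    """Prepend the compressed summary to the LM input; skip it when empty."""
    return f"{summary}\n\n{query}" if summary else query


def compression_rate(summary: str, documents: List[str]) -> float:
    """Rough compression rate: summary tokens over retrieved-document tokens
    (the abstract reports rates as low as ~6%)."""
    doc_tokens = sum(len(d.split()) for d in documents)
    return len(summary.split()) / max(doc_tokens, 1)
```

At inference the reader LM would then be called on augmented_input(query, summary); when the compressor returns an empty string, the query is passed through unaugmented.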
Related papers
- Two are better than one: Context window extension with multi-grained self-injection [111.1376461868317]
SharedLLM is a novel approach grounded in the design philosophy of multi-grained context compression and query-aware information retrieval.
We introduce a specialized tree-style data structure to efficiently encode, store and retrieve multi-grained contextual information for text chunks.
arXiv Detail & Related papers (2024-10-25T06:08:59Z)
- BRIEF: Bridging Retrieval and Inference for Multi-hop Reasoning via Compression [91.23933111083389]
BRIEF (Bridging Retrieval and Inference through Evidence Fusion) is a lightweight approach that performs query-aware multi-hop reasoning.
Based on our synthetic data built entirely by open-source models, BRIEF generates more concise summaries.
arXiv Detail & Related papers (2024-10-20T04:24:16Z)
- AdaComp: Extractive Context Compression with Adaptive Predictor for Retrieval-Augmented Large Language Models [15.887617654762629]
Retrieved documents containing noise will hinder RAG from detecting answer clues and make the inference process slow and expensive.
We introduce AdaComp, a low-cost extractive context compression method that adaptively determines the compression rate based on both query complexity and retrieval quality (see the illustrative sketch after this list).
arXiv Detail & Related papers (2024-09-03T03:25:59Z)
- CompAct: Compressing Retrieved Documents Actively for Question Answering [15.585833125854418]
CompAct is a novel framework that employs an active strategy to condense extensive documents without losing key information.
Our experiments demonstrate that CompAct brings significant improvements in both performance and compression rate on multi-hop question-answering benchmarks.
arXiv Detail & Related papers (2024-07-12T06:06:54Z)
- CaLM: Contrasting Large and Small Language Models to Verify Grounded Generation [76.31621715032558]
Grounded generation aims to equip language models (LMs) with the ability to produce more credible and accountable responses.
We introduce CaLM, a novel verification framework.
Our framework empowers smaller LMs, which rely less on parametric memory, to validate the output of larger LMs.
arXiv Detail & Related papers (2024-06-08T06:04:55Z)
- R4: Reinforced Retriever-Reorder-Responder for Retrieval-Augmented Large Language Models [32.598670876662375]
Retrieval-augmented large language models (LLMs) leverage relevant content retrieved by information retrieval systems to generate correct responses.
Existing retriever-responder methods typically append relevant documents to the prompt of LLMs to perform text generation tasks.
We propose a new pipeline named "Reinforced Retriever-Reorder-Responder" to learn document orderings for retrieval-augmented LLMs.
arXiv Detail & Related papers (2024-05-04T12:59:10Z)
- Retrieval-Pretrained Transformer: Long-range Language Modeling with Self-retrieval [51.437420003471615]
We propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch.
RPT improves retrieval quality and, in turn, perplexity across the board compared to strong baselines.
arXiv Detail & Related papers (2023-06-23T10:18:02Z)
- Adapting Language Models to Compress Contexts [71.98287002918941]
Transformer-based language models (LMs) are powerful and widely-applicable tools, but their usefulness is constrained by a finite context window.
We propose to adapt pre-trained LMs into AutoCompressors, which are capable of compressing long contexts into compact summary vectors.
We fine-tune OPT and Llama-2 models on sequences of up to 30,720 tokens and show that AutoCompressors can utilize long contexts to improve perplexity.
arXiv Detail & Related papers (2023-05-24T06:42:44Z)
- Semantic Compression With Large Language Models [1.0874100424278175]
Large language models (LLMs) are revolutionizing information retrieval, question answering, summarization, and code generation tasks.
LLMs are inherently limited by the number of input and output tokens that can be processed at once.
This paper presents three contributions to research on LLMs.
arXiv Detail & Related papers (2023-04-25T01:47:05Z)
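As a purely illustrative aside on the AdaComp entry above, which adapts the compression rate to query complexity and retrieval quality, one crude way to gate the retained context on retrieval confidence might look as follows. The thresholds and the length-based complexity proxy are invented for this sketch, not taken from the paper.

```python
# Illustrative only: a toy adaptive budget rule in the spirit of the AdaComp
# entry above. Thresholds and the "query complexity" proxy are invented here.
from typing import List


def adaptive_top_k(retrieval_scores: List[float], query: str) -> int:
    """Decide how many retrieved passages to keep before compression."""
    if not retrieval_scores:
        return 0
    confidence = max(retrieval_scores)                 # best retrieval score
    complexity = min(len(query.split()) / 20.0, 1.0)   # crude proxy in [0, 1]
    if confidence > 0.8 and complexity < 0.3:
        return 1   # easy query, confident retrieval: compress aggressively
    if confidence > 0.5:
        return 3
    return 5       # low confidence: keep more context for the reader LM
```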