RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective
Augmentation
- URL: http://arxiv.org/abs/2310.04408v1
- Date: Fri, 6 Oct 2023 17:55:36 GMT
- Title: RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective
Augmentation
- Authors: Fangyuan Xu, Weijia Shi, Eunsol Choi
- Abstract summary: We propose compressing retrieved documents into textual summaries prior to in-context integration.
This not only reduces computational costs but also relieves LMs of the burden of identifying relevant information in long retrieved documents.
We show that our compressors trained for one LM can transfer to other LMs on the language modeling task and provide summaries largely faithful to the retrieved documents.
- Score: 61.53695868960846
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Retrieving documents and prepending them in-context at inference time
improves the performance of language models (LMs) on a wide range of tasks. However,
these documents, often spanning hundreds of words, make inference substantially
more expensive. We propose compressing the retrieved documents into textual
summaries prior to in-context integration. This not only reduces the
computational costs but also relieves LMs of the burden of identifying relevant
information in long retrieved documents. We present two compressors -- an
extractive compressor which selects useful sentences from retrieved documents
and an abstractive compressor which generates summaries by synthesizing
information from multiple documents. Both compressors are trained to improve
LMs' performance on end tasks when the generated summaries are prepended to the
LMs' input, while keeping the summary concise. If the retrieved documents are
irrelevant to the input or offer no additional information to the LM, our
compressor can return an empty string, implementing selective augmentation. We
evaluate our approach on the language modeling task and the open-domain
question answering task, achieving a compression rate as low as 6% with minimal
loss in performance on both tasks and significantly outperforming off-the-shelf
summarization models. We show that our compressors trained for one LM can
transfer to other LMs on the language modeling task and provide summaries
largely faithful to the retrieved documents.
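To make the approach concrete, here is a minimal sketch of the extractive variant with selective augmentation. The token-overlap scorer, the `tau` threshold, and the top-k budget are illustrative assumptions standing in for the paper's trained compressor, which is optimized against end-task performance.

```python
# Minimal sketch of an extractive compressor with selective augmentation,
# in the spirit of RECOMP. The token-overlap scorer, threshold `tau`, and
# top-k budget are toy assumptions, not the paper's trained model.
import re

def split_sentences(doc: str) -> list[str]:
    # Naive splitter; a real system would use a proper sentence tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", doc) if s.strip()]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def score(query: str, sentence: str) -> float:
    # Stand-in relevance score: fraction of query tokens the sentence covers.
    q = tokens(query)
    return len(q & tokens(sentence)) / max(len(q), 1)

def compress(query: str, docs: list[str], top_k: int = 2, tau: float = 0.3) -> str:
    """Keep up to `top_k` sentences scoring at least `tau`; return "" when
    nothing qualifies (selective augmentation: the LM then runs unaugmented)."""
    sentences = sorted((s for d in docs for s in split_sentences(d)),
                       key=lambda s: score(query, s), reverse=True)
    return " ".join(s for s in sentences[:top_k] if score(query, s) >= tau)

query = "Who wrote the novel Dracula?"
docs = [
    "Dracula is an 1897 Gothic horror novel. The novel Dracula was written by Bram Stoker.",
    "Whitby Abbey is a ruined Benedictine abbey overlooking the North Sea.",
]
summary = compress(query, docs)
prompt = f"{summary}\n\n{query}" if summary else query  # prepend only if useful
print(prompt)
```

The property mirrored here is that compression can degrade to the empty string, so an unhelpful retrieval never pollutes the prompt.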
Related papers
- Scaling Multi-Document Event Summarization: Evaluating Compression vs. Full-Text Approaches [5.856976164399712]
We contrast two classes of systems for large-scale multi-document summarization (MDS): compression and full-text.
Full-text methods promise a lossless summary by relying on recent advances in long-context reasoning.
We show that compression-based methods hold strong promise at intermediate stages, even outperforming full-text approaches.
arXiv Detail & Related papers (2025-02-10T16:15:08Z)
- Efficient Long Context Language Model Retrieval with Compression [57.09163579304332]
Long Context Language Models (LCLMs) have emerged as a new paradigm for performing Information Retrieval (IR).
We propose CoLoR, a compression approach tailored for LCLM retrieval, trained to maximize retrieval performance while minimizing the length of the compressed passages.
We show that CoLoR improves the retrieval performance by 6% while compressing the in-context size by a factor of 1.91.
arXiv Detail & Related papers (2024-12-24T07:30:55Z)
- BRIEF: Bridging Retrieval and Inference for Multi-hop Reasoning via Compression [91.23933111083389]
Retrieval-augmented generation (RAG) can supplement large language models (LLMs) by integrating external knowledge.
This paper presents BRIEF, a lightweight approach that performs query-aware multi-hop reasoning.
Based on our synthetic data built entirely by open-source models, BRIEF generates more concise summaries.
arXiv Detail & Related papers (2024-10-20T04:24:16Z)
- AdaComp: Extractive Context Compression with Adaptive Predictor for Retrieval-Augmented Large Language Models [15.887617654762629]
Retrieved documents containing noise hinder RAG from detecting answer clues and make inference slow and expensive.
We introduce AdaComp, a low-cost extractive context compression method that adaptively determines the compression rate based on both query complexity and retrieval quality (a toy sketch of this adaptive policy appears after this list).
arXiv Detail & Related papers (2024-09-03T03:25:59Z)
- CompAct: Compressing Retrieved Documents Actively for Question Answering [15.585833125854418]
CompAct is a novel framework that employs an active strategy to condense extensive documents without losing key information.
Our experiments demonstrate that CompAct brings significant improvements in both performance and compression rate on multi-hop question-answering benchmarks.
arXiv Detail & Related papers (2024-07-12T06:06:54Z)
- CaLM: Contrasting Large and Small Language Models to Verify Grounded Generation [76.31621715032558]
Grounded generation aims to equip language models (LMs) with the ability to produce more credible and accountable responses.
We introduce CaLM, a novel verification framework.
Our framework empowers smaller LMs, which rely less on parametric memory, to validate the output of larger LMs (a minimal sketch of this contrast-based check also follows the list).
arXiv Detail & Related papers (2024-06-08T06:04:55Z)
- Retrieval-Pretrained Transformer: Long-range Language Modeling with Self-retrieval [51.437420003471615]
We propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch.
RPT improves retrieval quality and, in turn, perplexity across the board compared to strong baselines.
arXiv Detail & Related papers (2023-06-23T10:18:02Z)
- Adapting Language Models to Compress Contexts [71.98287002918941]
Transformer-based language models (LMs) are powerful and widely-applicable tools, but their usefulness is constrained by a finite context window.
We propose to adapt pre-trained LMs into AutoCompressors, which are capable of compressing long contexts into compact summary vectors.
We fine-tune OPT and Llama-2 models on sequences of up to 30,720 tokens and show that AutoCompressors can utilize long contexts to improve perplexity.
arXiv Detail & Related papers (2023-05-24T06:42:44Z)
- Semantic Compression With Large Language Models [1.0874100424278175]
Large language models (LLMs) are revolutionizing information retrieval, question answering, summarization, and code generation tasks.
LLMs are inherently limited by the number of input and output tokens that can be processed at once.
This paper presents three contributions to research on LLMs.
arXiv Detail & Related papers (2023-04-25T01:47:05Z)
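As referenced in the AdaComp entry above, here is a toy sketch of an adaptive compression-rate policy. The length- and score-based proxies for query complexity and retrieval quality, and the fixed weights, are assumptions for illustration; the paper trains a dedicated predictor instead.

```python
# Illustrative sketch of an AdaComp-style adaptive compression rate: a cheap
# predictor maps (query complexity, retrieval quality) to how many retrieved
# passages to keep. Features and weights below are toy assumptions.
def query_complexity(query: str) -> float:
    # Proxy: longer, multi-clause questions are treated as more complex.
    return min(len(query.split()) / 20.0, 1.0)

def retrieval_quality(scores: list[float]) -> float:
    # Proxy: mean retriever score of the candidate passages, clipped to [0, 1].
    return max(0.0, min(sum(scores) / len(scores), 1.0)) if scores else 0.0

def passages_to_keep(query: str, scores: list[float], max_k: int = 5) -> int:
    # Keep more context for complex queries; keep less when retrieval is noisy.
    rate = 0.5 * query_complexity(query) + 0.5 * retrieval_quality(scores)
    return round(rate * max_k)

print(passages_to_keep("Who wrote Dracula?", [0.9, 0.4, 0.2]))  # -> 2 passages
print(passages_to_keep(
    "Which author of Dracula also managed the Lyceum Theatre, and when?",
    [0.8, 0.7, 0.6]))                                           # -> 3 passages
```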
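And the contrast-based check referenced in the CaLM entry: a hedged sketch in which a small LM re-answers the question from only the cited evidence, and the large LM's draft is flagged when the two disagree. The callables `large_lm` and `small_lm` are stand-ins, not the paper's models or verification procedure.

```python
# Illustrative sketch of a CaLM-style check: a small LM, which relies less on
# parametric memory, answers from only the cited passages; disagreement with
# the large LM's draft flags the response for revision.
from typing import Callable

LM = Callable[[str], str]

def calm_verify(question: str, citations: list[str],
                large_lm: LM, small_lm: LM) -> tuple[str, bool]:
    draft = large_lm(question)
    grounded_prompt = "\n".join(citations) + f"\nQuestion: {question}\nAnswer:"
    check = small_lm(grounded_prompt)  # small LM sees only the evidence
    # Toy agreement test: the evidence-grounded answer appears in the draft.
    verified = check.strip().lower() in draft.strip().lower()
    return draft, verified

# Toy stand-ins so the sketch runs end to end.
big = lambda q: "Bram Stoker wrote Dracula."
small = lambda p: "Bram Stoker"
answer, ok = calm_verify("Who wrote Dracula?",
                         ["Dracula was written by Bram Stoker."], big, small)
print(answer, ok)  # -> Bram Stoker wrote Dracula. True
```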