Training With "Paraphrasing the Original Text'' Improves Long-Context Performance
- URL: http://arxiv.org/abs/2312.11193v8
- Date: Thu, 11 Apr 2024 03:29:20 GMT
- Title: Training With "Paraphrasing the Original Text" Improves Long-Context Performance
- Authors: Yijiong Yu
- Abstract summary: As Large Language Models (LLMs) continue to evolve, more are being designed to handle long-context inputs.
This paper identifies the root of their long-context precision issues, such as "lost in the middle", as a deficiency in retrieval capabilities, exacerbated by the sparsity of key information in long contexts.
We introduce a novel approach called "Paraphrasing the Original Text", aimed at augmenting LLMs' proficiency in extracting information from long contexts.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As Large Language Models (LLMs) continue to evolve, more are being designed to handle long-context inputs. Despite this advancement, many models face challenges in achieving high precision on long-context tasks, often showing a "lost in the middle" issue. This paper identifies the root of these issues as a deficiency in retrieval capabilities, exacerbated by the sparsity of key information in long contexts. To tackle this challenge, we introduce a novel approach called "Paraphrasing the Original Text", aimed at augmenting LLMs' proficiency in extracting information from long contexts. This enhancement is achieved through a specialized supervised fine-tuning stage that incorporates paraphrasing information into training samples, thereby improving the model's retrieval capabilities for long-context scenarios. Testing on datasets like LongBench and the NaturalQuestions Multi-document QA dataset, our method demonstrated significant improvements in managing long-context tasks, effectively addressing the "lost in the middle" dilemma. Specifically, we observed an average performance increase of 6.4% and 5.9% across these datasets, respectively. Moreover, our approach is efficient, requiring minimal overhead, with fine-tuning needed on just 19k samples. The model and training data have been made available on HuggingFace (https://huggingface.co/yuyijiong/Qwen-14b-chat-yarn-32k).
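To make the recipe concrete, below is a minimal sketch of how one paraphrase-augmented fine-tuning sample might be assembled for multi-document QA. The function, field names, and prompt wording are illustrative assumptions rather than the paper's released code; only the general idea, that the target output restates the relevant original text before answering, follows the abstract.

```python
# Illustrative sketch (assumed, not the paper's code): build one SFT sample
# whose target first restates the relevant source passage and then answers,
# training the model to retrieve key information from long contexts explicitly.

def build_paraphrase_sample(documents, question, relevant_passage, answer):
    """Assemble a (prompt, target) pair for long-context fine-tuning."""
    context = "\n\n".join(
        f"Document {i + 1}:\n{doc}" for i, doc in enumerate(documents)
    )
    prompt = (
        f"{context}\n\n"
        f"Question: {question}\n"
        "First restate the original text relevant to the question, then answer it."
    )
    target = (
        f'The relevant original text is: "{relevant_passage}"\n'
        f"Therefore, the answer is: {answer}"
    )
    return {"prompt": prompt, "target": target}

sample = build_paraphrase_sample(
    documents=["(many distractor documents)", "Paris is the capital of France."],
    question="What is the capital of France?",
    relevant_passage="Paris is the capital of France.",
    answer="Paris",
)
```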
Related papers
- KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches
Long-context capability is a crucial competency for large language models (LLMs).
This work provides a taxonomy of current methods and evaluates 10+ state-of-the-art approaches across seven categories of long-context tasks.
arXiv Detail & Related papers (2024-07-01T17:59:47Z) - Leveraging Visual Tokens for Extended Text Contexts in Multi-Modal Learning [68.43706033424378]
This study introduces an innovative method designed to efficiently increase in-context text length in multi-modal large language models (MLLMs).
We present Visualized In-Context Text Processing (VisInContext), which processes long in-context text using visual tokens.
This technique significantly reduces GPU memory usage and floating-point operations (FLOPs) in both the training and inference stages; a sketch of the text-as-image idea appears after this list.
arXiv Detail & Related papers (2024-06-04T17:59:25Z) - LongSkywork: A Training Recipe for Efficiently Extending Context Length in Large Language Models [61.12177317970258]
LongSkywork is a long-context Large Language Model capable of processing up to 200,000 tokens.
We develop two novel methods for creating synthetic data.
LongSkywork achieves outstanding performance on a variety of long-context benchmarks.
arXiv Detail & Related papers (2024-06-02T03:34:41Z) - Quest: Query-centric Data Synthesis Approach for Long-context Scaling of Large Language Model [22.07414287186125]
We propose a Query-centric data synthesis method, abbreviated as Quest.
We synthesize a long-context dataset with up to 128k context length, significantly outperforming other data synthesis methods on multiple long-context benchmarks.
arXiv Detail & Related papers (2024-05-30T08:50:55Z) - Long Context is Not Long at All: A Prospector of Long-Dependency Data for Large Language Models [13.091271774417867]
Long-context modeling capabilities are important for large language models (LLMs) in various applications.
We propose a data mining framework, ProLong, that can assign each training sample a long-dependency score (see the sketch after this list).
Comprehensive experiments on multiple benchmarks indicate that ProLong effectively identifies documents that carry long dependencies.
arXiv Detail & Related papers (2024-05-28T07:36:56Z) - Long Context Alignment with Short Instructions and Synthesized Positions [56.1267385315404]
This paper introduces Step-Skipping Alignment (SkipAlign), a new technique designed to enhance the long-context capabilities of Large Language Models (LLMs); a sketch of the position-skipping idea appears after this list.
With a careful selection of the base model and alignment datasets, SkipAlign with only 6B parameters achieves its best performance, comparable with strong baselines like GPT-3.5-Turbo-16K on LongBench.
arXiv Detail & Related papers (2024-05-07T01:56:22Z) - Never Lost in the Middle: Improving Large Language Models via Attention
Strengthening Question Answering [0.14043931310479374]
Large language models (LLMs) struggle to locate correct information in long contexts.
This paper proposes to enhance the information searching and reflection ability of LLMs in long contexts via specially designed tasks.
Experimental results show substantial improvements on multi-document QA and other benchmarks, surpassing state-of-the-art models by a 13.7% absolute gain in shuffled settings.
arXiv Detail & Related papers (2023-11-15T18:42:44Z) - Effective Long-Context Scaling of Foundation Models [90.57254298730923]
We present a series of long-context LLMs that support effective context windows of up to 32,768 tokens.
Our models achieve consistent improvements on most regular tasks and significant improvements on long-context tasks over Llama 2.
arXiv Detail & Related papers (2023-09-27T21:41:49Z) - Augmenting Data for Sarcasm Detection with Unlabeled Conversation
Context [55.898436183096614]
We present a novel data augmentation technique, CRA (Contextual Response Augmentation), which utilizes conversational context to generate meaningful samples for training.
Specifically, our model, trained with the proposed data augmentation technique, won the sarcasm detection task of FigLang2020, achieving the best performance on both the Reddit and Twitter datasets.
arXiv Detail & Related papers (2020-06-11T09:00:11Z)