LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens
- URL: http://arxiv.org/abs/2402.13753v1
- Date: Wed, 21 Feb 2024 12:30:33 GMT
- Title: LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens
- Authors: Yiran Ding, Li Lyna Zhang, Chengruidong Zhang, Yuanyuan Xu, Ning
Shang, Jiahang Xu, Fan Yang, Mao Yang
- Abstract summary: Current extended context windows are limited to around 128k tokens.
LongRoPE extends the context window of pre-trained LLMs to an impressive 2048k tokens.
- Score: 7.833740464264734
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large context window is a desirable feature in large language models (LLMs).
However, due to high fine-tuning costs, scarcity of long texts, and
catastrophic values introduced by new token positions, current extended context
windows are limited to around 128k tokens. This paper introduces LongRoPE that,
for the first time, extends the context window of pre-trained LLMs to an
impressive 2048k tokens, with at most 1k fine-tuning steps at training lengths
within 256k, while maintaining performance at the original short context
window. This is achieved by three key innovations: (i) we identify and exploit
two forms of non-uniformities in positional interpolation through an efficient
search, providing a better initialization for fine-tuning and enabling an 8x
extension in non-fine-tuning scenarios; (ii) we introduce a progressive
extension strategy that first fine-tunes a 256k length LLM and then conducts a
second positional interpolation on the fine-tuned extended LLM to achieve a
2048k context window; (iii) we readjust LongRoPE on 8k length to recover the
short context window performance. Extensive experiments on LLaMA2 and Mistral
across various tasks demonstrate the effectiveness of our method. Models
extended via LongRoPE retain the original architecture with minor modifications
to the positional embedding, and can reuse most pre-existing optimizations.
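To make the positional-embedding change concrete, here is a minimal sketch (not the authors' released code) of non-uniform positional interpolation on RoPE: each rotary dimension gets its own rescale factor, whereas standard positional interpolation uses a single uniform factor. The `rescale` values below are illustrative placeholders; LongRoPE finds such factors via an efficient search.

```python
import numpy as np

def rope_angles(seq_len, head_dim, base=10000.0, rescale=None):
    """Rotary-embedding angles with optional per-dimension rescaling.

    Standard positional interpolation divides every frequency by one uniform
    factor; LongRoPE-style non-uniform interpolation lets each rotary dimension
    have its own factor (found by search in the paper; placeholders here).
    """
    half = head_dim // 2
    inv_freq = 1.0 / (base ** (np.arange(half) / half))   # (half,)
    if rescale is not None:
        inv_freq = inv_freq / np.asarray(rescale)         # per-dimension interpolation
    positions = np.arange(seq_len)[:, None]               # (seq_len, 1)
    return positions * inv_freq[None, :]                  # (seq_len, half)

def apply_rope(x, angles):
    """Rotate interleaved (even, odd) pairs of a (seq_len, head_dim) array."""
    x1, x2 = x[:, 0::2], x[:, 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Hypothetical 8x extension of a 4k-trained model to 32k: larger interpolation
# factors on the lower-frequency dimensions. These values are illustrative only.
head_dim, target_len = 64, 32_768
rescale = np.linspace(1.0, 8.0, head_dim // 2)
q = np.random.randn(target_len, head_dim).astype(np.float32)
q_rotated = apply_rope(q, rope_angles(target_len, head_dim, rescale=rescale))
```

Setting every entry of `rescale` to the same ratio recovers uniform positional interpolation; the abstract's claim is that exploiting non-uniformities across the interpolation gives a better fine-tuning initialization and an 8x training-free extension.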
Related papers
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models [72.71150585370147]
LongRecipe is an efficient training strategy for extending the context window of large language models.
It simulates long-sequence inputs while maintaining training efficiency and significantly improves the model's understanding of long-range dependencies.
LongRecipe can utilize long sequences while requiring only 30% of the target context window size, and reduces computational training resources by over 85% compared to full-sequence training.
arXiv Detail & Related papers (2024-08-31T17:19:30Z) - ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities [53.97515452727115]
ChatQA 2 is a Llama 3.0-based model with a 128K context window.
We present a training recipe to extend the context window of Llama3-70B-base from 8K to 128K tokens.
Our results demonstrate that the Llama3-ChatQA-2-70B model outperforms most existing state-of-the-art models.
arXiv Detail & Related papers (2024-07-19T17:35:47Z) - LongEmbed: Extending Embedding Models for Long Context Retrieval [87.60404151086715]
This paper explores context window extension of embedding models, pushing the limit to 32k without requiring additional training.
First, we examine the performance of current embedding models for long context retrieval on our newly constructed LongEmbed benchmark.
Experiments show that training-free context window extension strategies like position interpolation can effectively extend the context window of existing embedding models severalfold.
arXiv Detail & Related papers (2024-04-18T11:29:23Z) - XL$^2$Bench: A Benchmark for Extremely Long Context Understanding with Long-range Dependencies [45.31042312867939]
Large Language Models (LLMs) have demonstrated remarkable performance across diverse tasks but are constrained by their small context window sizes.
Various methods have been proposed to expand the context window, accommodating up to 200K input tokens.
We introduce a benchmark for extremely long context understanding with long-range dependencies, XL$^2$Bench.
arXiv Detail & Related papers (2024-04-08T12:29:07Z) - Extending LLMs' Context Window with 100 Samples [42.52554295241792]
Large Language Models (LLMs) are known to have limited extrapolation ability beyond their pre-trained context window.
Recent studies have sought to extend the context window by modifying rotary position embedding (RoPE).
We introduce a novel extension to RoPE that combines adjusting RoPE's base frequency and scaling the attention logits to help LLMs efficiently adapt to a larger context window (see the first sketch after this list).
arXiv Detail & Related papers (2024-01-13T07:57:01Z) - Retrieval meets Long Context Large Language Models [59.431200671427064]
Extending the context window of large language models (LLMs) has recently become popular.
Retrieval-augmentation versus long context window, which one is better for downstream tasks?
Can both methods be combined to get the best of both worlds?
Our best model, retrieval-augmented Llama2-70B with 32K context window, outperforms GPT-3.5-turbo-16k and Davinci003 in terms of average score on nine long context tasks.
arXiv Detail & Related papers (2023-10-04T17:59:41Z) - PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training [91.99700930388998]
We propose Positional Skip-wisE (PoSE) training, which simulates long inputs using a fixed context window (see the second sketch after this list).
PoSE greatly reduces memory and time overhead compared with full-length fine-tuning.
We have successfully extended the LLaMA model to 128k tokens using a 2k training context window.
arXiv Detail & Related papers (2023-09-19T08:03:38Z) - Parallel Context Windows for Large Language Models [52.965170346907904]
We present Parallel Context Windows (PCW), a method that alleviates the context window restriction for any off-the-shelf LLM without further training.
Our main results test the PCW approach on in-context learning with models that range in size between 750 million and 178 billion parameters.
We show additional benefits in other settings where long context windows may be beneficial: multi-hop questions and retrieval-augmented question answering with multiple retrieved documents.
arXiv Detail & Related papers (2022-12-21T11:38:51Z)
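For contrast with searched per-dimension factors, the "Extending LLMs' Context Window with 100 Samples" entry above combines two simpler knobs: adjusting RoPE's base frequency and scaling the attention logits. A minimal sketch of both knobs follows; the enlarged base and the logit temperature are placeholder values, not settings from that paper.

```python
import numpy as np

def rope_inv_freq(head_dim, base=10000.0):
    """Inverse rotary frequencies; a larger base stretches the wavelengths,
    so far-apart positions rotate more slowly."""
    half = head_dim // 2
    return 1.0 / (base ** (np.arange(half) / half))

def attention_logits(q, k, head_dim, temperature=1.0):
    """Scaled dot-product logits with an extra temperature on top of 1/sqrt(d)."""
    return temperature * (q @ k.T) / np.sqrt(head_dim)

head_dim = 64
inv_freq_pretrained = rope_inv_freq(head_dim, base=10_000.0)
inv_freq_extended = rope_inv_freq(head_dim, base=500_000.0)  # placeholder larger base

rng = np.random.default_rng(0)
q, k = rng.standard_normal((8, head_dim)), rng.standard_normal((8, head_dim))
logits = attention_logits(q, k, head_dim, temperature=1.1)   # placeholder temperature
```

The PoSE entry above describes simulating long inputs within a fixed training window. The sketch below is an illustrative reconstruction of that idea, assuming the window is split into chunks whose position ids are offset by random skips so that large relative distances are seen at small cost; it is not the paper's exact scheme.

```python
import numpy as np

def skipwise_position_ids(window_len, target_len, num_chunks=2, rng=None):
    """Assign position ids from a much longer target range to a short window
    by splitting the window into chunks and offsetting each chunk with a
    random skip (illustrative reconstruction, not the paper's exact scheme)."""
    rng = rng or np.random.default_rng()
    chunk_len = window_len // num_chunks
    max_skip = target_len - window_len
    # Sorted random skips keep position ids strictly increasing across chunks.
    skips = np.sort(rng.integers(0, max_skip + 1, size=num_chunks))
    ids = []
    for i in range(num_chunks):
        start = i * chunk_len
        ids.append(np.arange(start, start + chunk_len) + skips[i])
    return np.concatenate(ids)

# Example: a 2k training window standing in for positions drawn from a 128k range.
pos_ids = skipwise_position_ids(window_len=2048, target_len=131072, num_chunks=2)
```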
This list is automatically generated from the titles and abstracts of the papers on this site.