LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
- URL: http://arxiv.org/abs/2309.12307v3
- Date: Fri, 8 Mar 2024 15:26:38 GMT
- Title: LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
- Authors: Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song
Han, Jiaya Jia
- Abstract summary: LongLoRA is an efficient fine-tuning approach that extends the context sizes of pre-trained large language models.
We demonstrate strong empirical results on various tasks on Llama2 models from 7B/13B to 70B.
- Score: 67.58275666573496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present LongLoRA, an efficient fine-tuning approach that extends the
context sizes of pre-trained large language models (LLMs), with limited
computation cost. Typically, training LLMs with long context sizes is
computationally expensive, requiring extensive training hours and GPU
resources. For example, training with a context length of 8192 incurs 16x the
self-attention computation of training at a context length of 2048. In this paper, we
speed up the context extension of LLMs in two aspects. On the one hand,
although dense global attention is needed during inference, the model can be
fine-tuned effectively and efficiently with sparse local attention. The
proposed shifted sparse attention (S^2-Attn) enables context extension with
substantial computation savings while matching the performance of fine-tuning
with vanilla attention. In particular, it can be implemented with only two
lines of code in training and is optional at inference. On the
other hand, we revisit the parameter-efficient fine-tuning regime for context
expansion. Notably, we find that LoRA for context extension works well
provided that the embedding and normalization layers are also trainable.
LongLoRA combines this improved LoRA with S^2-Attn and demonstrates strong
empirical results on
various tasks with Llama2 models from 7B/13B to 70B. It extends Llama2 7B
from a 4k context to 100k, and Llama2 70B to 32k, on a single 8x A100 machine.
LongLoRA extends models' context while retaining their original architectures
and is compatible with most existing techniques, such as FlashAttention-2. In
addition, we conduct supervised fine-tuning with LongLoRA and our long
instruction-following LongAlpaca dataset.
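For intuition, below is a minimal PyTorch sketch of the shifted sparse attention idea described in the abstract: tokens are attended to within fixed-size groups, and half of the attention heads are shifted by half a group so information can flow across group boundaries. The tensor layout, group size, and helper names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def shifted_sparse_attention(q, k, v, group_size):
    """Sketch of S^2-Attn, assuming q, k, v of shape
    (batch, num_heads, seq_len, head_dim) with seq_len divisible by group_size.
    NOTE: the original method also masks the group that wraps around after the
    shift; that detail is omitted here for brevity."""
    bsz, num_heads, seq_len, head_dim = q.shape
    half = num_heads // 2
    n_groups = seq_len // group_size

    def shift(x, direction):
        # Shift tokens of the second half of the heads by half a group.
        x = x.clone()
        x[:, half:] = torch.roll(x[:, half:], shifts=direction * group_size // 2, dims=2)
        return x

    q, k, v = (shift(t, -1) for t in (q, k, v))

    def to_groups(x):
        # Fold each group into the batch dimension so attention stays local.
        return (x.reshape(bsz, num_heads, n_groups, group_size, head_dim)
                 .transpose(1, 2)
                 .reshape(bsz * n_groups, num_heads, group_size, head_dim))

    out = F.scaled_dot_product_attention(to_groups(q), to_groups(k), to_groups(v),
                                         is_causal=True)

    # Undo the grouping and the shift.
    out = (out.reshape(bsz, n_groups, num_heads, group_size, head_dim)
              .transpose(1, 2)
              .reshape(bsz, num_heads, seq_len, head_dim))
    out = shift(out, +1)
    return out
```

For example, with q, k, v of shape (1, 32, 8192, 128) and group_size 2048, each token attends within one of four groups, while 16 of the 32 heads see groups shifted by 1024 tokens.

The "improved LoRA" part amounts to keeping the embedding and normalization layers trainable alongside the low-rank adapters. A hedged sketch, assuming the HuggingFace PEFT library and Llama-style module names (the ranks and target names below are illustrative):

```python
from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # Keep embeddings and the final normalization fully trainable,
    # as LongLoRA finds necessary for context extension.
    modules_to_save=["embed_tokens", "norm"],
)
# model = get_peft_model(base_model, config)  # base_model: a loaded Llama2 checkpoint
```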
Related papers
- Extending Llama-3's Context Ten-Fold Overnight [23.163055795834765]
We extend the context length of Llama-3-8B-Instruct from 8K to 80K via QLoRA fine-tuning.
The resulting model exhibits superior performances across a broad range of evaluation tasks.
arXiv Detail & Related papers (2024-04-30T13:25:20Z)
- LLoCO: Learning Long Contexts Offline [63.3458260335454]
We introduce LLoCO, a technique that combines context compression, retrieval, and parameter-efficient finetuning using LoRA.
We evaluate our approach on several long-context question-answering datasets, demonstrating that LLoCO significantly outperforms in-context learning.
arXiv Detail & Related papers (2024-04-11T17:57:22Z)
- Training-Free Long-Context Scaling of Large Language Models [114.53296002607993]
We propose Dual Chunk Attention, which enables Llama2 70B to support context windows of more than 100k tokens without continual training.
By decomposing the attention for long sequences into chunk-based modules, DCA manages to effectively capture the relative positional information of tokens.
arXiv Detail & Related papers (2024-02-27T12:39:23Z)
- InfLLM: Training-Free Long-Context Extrapolation for LLMs with an Efficient Context Memory [93.20588235940453]
In this paper, we introduce a training-free memory-based method, InfLLM.
InfLLM stores distant contexts into additional memory units and employs an efficient mechanism to lookup token-relevant units for attention.
Even when the sequence length is scaled to 1,024K, InfLLM still effectively captures long-distance dependencies.
arXiv Detail & Related papers (2024-02-07T06:50:42Z)
- E^2-LLM: Efficient and Extreme Length Extension of Large Language Models [74.1254067728251]
We propose an Efficient and Extreme length extension method for Large Language Models, called E^2-LLM, with only one training procedure and dramatically reduced cost.
Comprehensive experimental results on multiple benchmark datasets demonstrate the effectiveness of our E^2-LLM on challenging long-context tasks.
arXiv Detail & Related papers (2024-01-13T02:11:20Z)
- LongQLoRA: Efficient and Effective Method to Extend Context Length of Large Language Models [2.4366811507669124]
LongQLoRA is a method to extend context length of large language models with less training resources.
With a single 32GB V100 GPU, LongQLoRA can extend the context length of LLaMA2 7B and 13B from 4096 to 8192 and even to 12k within 1000 finetuning steps.
LongQLoRA achieves competitive perplexity performance on PG19 and Proof-pile datasets.
arXiv Detail & Related papers (2023-11-08T18:33:06Z)
- CLEX: Continuous Length Extrapolation for Large Language Models [68.43814043853347]
We propose Continuous Length EXtrapolation (CLEX) for Large Language Models (LLMs).
CLEX extends the context window to over 4x or almost 8x training length, with no deterioration in performance.
Our model trained on a 4k length exhibits competitive performance against state-of-the-art open-source models trained on context lengths up to 32k.
arXiv Detail & Related papers (2023-10-25T08:13:02Z)
- LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition [46.770388457085936]
Low-rank adaptations (LoRA) are often employed to fine-tune large language models (LLMs) for new tasks.
This paper introduces LoraHub, a framework devised for the purposive assembly of LoRA modules trained on diverse given tasks.
With just a few examples from a new task, LoraHub can fluidly combine multiple LoRA modules, eliminating the need for human expertise and assumptions (a minimal composition sketch follows this list).
arXiv Detail & Related papers (2023-07-25T05:39:21Z)
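To make the LoRA-composition idea from the LoraHub entry concrete, here is a minimal, hypothetical sketch of merging several LoRA modules with scalar weights. LoraHub itself learns these weights with a gradient-free optimizer on a few task examples; that step is omitted here, and all names below are illustrative assumptions.

```python
import torch

def compose_lora_deltas(lora_modules, weights):
    """Hypothetical helper: combine LoRA modules into one weight delta per layer.

    lora_modules: list of dicts mapping layer name -> (A, B) low-rank factors,
                  where each module's delta is B @ A (shape: out_dim x in_dim).
    weights:      one scalar coefficient per module (LoraHub learns these with a
                  gradient-free optimizer; here they are simply given).
    """
    composed = {}
    for module, w in zip(lora_modules, weights):
        for name, (A, B) in module.items():
            delta = w * (B @ A)
            composed[name] = composed.get(name, 0) + delta
    return composed

# Illustrative usage with two toy rank-4 LoRA modules on a single layer.
rank, d_in, d_out = 4, 16, 16
make = lambda: {"layer0": (torch.randn(rank, d_in), torch.randn(d_out, rank))}
deltas = compose_lora_deltas([make(), make()], weights=[0.6, 0.4])
print(deltas["layer0"].shape)  # torch.Size([16, 16])
```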