ASPEN: High-Throughput LoRA Fine-Tuning of Large Language Models with a
Single GPU
- URL: http://arxiv.org/abs/2312.02515v1
- Date: Tue, 5 Dec 2023 05:38:38 GMT
- Title: ASPEN: High-Throughput LoRA Fine-Tuning of Large Language Models with a
Single GPU
- Authors: Zhengmao Ye and Dengchun Li and Jingqi Tian and Tingfeng Lan and Jie
Zuo and Lei Duan and Hui Lu and Yexi Jiang and Jian Sha and Ke Zhang and
Mingjie Tang
- Abstract summary: We present ASPEN, a framework for fine-tuning transformer-based large language models (LLMs).
ASPEN efficiently trains multiple jobs on a single GPU using the LoRA method, leveraging a shared pre-trained model and adaptive scheduling.
Experiments show that ASPEN saves 53% of GPU memory when training multiple LLaMA-7B models on an NVIDIA A100 80GB GPU.
- Score: 4.198627205271621
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transformer-based large language models (LLMs) have demonstrated outstanding
performance across diverse domains, particularly when fine-tuned for specific
domains. Recent studies suggest that the resources required for fine-tuning
LLMs can be economized through parameter-efficient methods such as Low-Rank
Adaptation (LoRA). While LoRA effectively reduces computational burdens and
resource demands, it currently supports only a single-job fine-tuning setup.
In this paper, we present ASPEN, a high-throughput framework for fine-tuning
LLMs. ASPEN efficiently trains multiple jobs on a single GPU using the LoRA
method, leveraging a shared pre-trained model and adaptive scheduling. ASPEN is
compatible with transformer-based language models such as LLaMA and ChatGLM.
Experiments show that ASPEN saves 53% of GPU memory when training multiple
LLaMA-7B models on an NVIDIA A100 80GB GPU and boosts training throughput by about
17% compared to existing methods when training with various pre-trained models
on different GPUs. The adaptive scheduling algorithm reduces turnaround time by
24% and end-to-end training latency by 12%, while prioritizing jobs and
preventing out-of-memory issues.
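The abstract's core idea, several LoRA fine-tuning jobs sharing one frozen pre-trained model on a single GPU, can be illustrated with a minimal PyTorch sketch. This is not ASPEN's implementation; the class and parameter names (SharedBaseLoRA, n_jobs, rank r, alpha) are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class SharedBaseLoRA(nn.Module):
    """Hypothetical sketch: one frozen linear layer shared by several LoRA jobs."""

    def __init__(self, base: nn.Linear, n_jobs: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # shared pre-trained weights stay frozen
        self.scaling = alpha / r
        # One independent (A, B) pair per fine-tuning job; only these are trainable.
        self.A = nn.ParameterList(
            [nn.Parameter(torch.randn(r, base.in_features) * 0.01) for _ in range(n_jobs)])
        self.B = nn.ParameterList(
            [nn.Parameter(torch.zeros(base.out_features, r)) for _ in range(n_jobs)])

    def forward(self, x: torch.Tensor, job: int) -> torch.Tensor:
        # Base projection uses the shared frozen weights; the per-job
        # low-rank update B @ A is added on top, as in standard LoRA.
        return self.base(x) + ((x @ self.A[job].T) @ self.B[job].T) * self.scaling

# Two fine-tuning jobs reuse the same frozen 4096x4096 projection,
# so only 2 * r * (4096 + 4096) extra parameters are trained.
layer = SharedBaseLoRA(nn.Linear(4096, 4096), n_jobs=2)
x = torch.randn(1, 16, 4096)
out_job0, out_job1 = layer(x, job=0), layer(x, job=1)
```

Because the large base weight is stored only once, per-job memory cost is limited to the small adapter matrices and their optimizer state; deciding which jobs run in each batch is the role of the adaptive scheduler the abstract describes.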
Related papers
- LoRA Fine-Tuning Without GPUs: A CPU-Efficient Meta-Generation Framework for LLMs [8.397730500554047]
Low-Rank Adapters (LoRAs) have transformed the fine-tuning of Large Language Models (LLMs) by enabling parameter-efficient updates. We propose a theoretically grounded approach to LoRA fine-tuning designed specifically for users with limited computational resources.
arXiv Detail & Related papers (2025-07-02T15:24:47Z) - Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights [75.83625828306839]
Drag-and-Drop LLMs (DnD) eliminates per-task training by mapping a handful of unlabeled task prompts directly to LoRA weight updates. A lightweight text encoder distills each prompt batch into condition embeddings, which are then transformed by a cascaded hyper-convolutional decoder into the full set of LoRA matrices.
arXiv Detail & Related papers (2025-06-19T15:38:21Z) - Dynamic Low-Rank Sparse Adaptation for Large Language Models [54.1231638555233]
Low-rank Sparse Adaptation (LoSA) is a novel method that seamlessly integrates low-rank adaptation into LLM sparsity.
LoSA dynamically sparsifies the LoRA outcomes based on the corresponding sparse weights during fine-tuning.
LoSA can efficiently boost the efficacy of sparse LLMs within a few hours, without introducing any additional inference overhead.
arXiv Detail & Related papers (2025-02-20T18:37:32Z) - Retrieval-Augmented Mixture of LoRA Experts for Uploadable Machine Learning [57.36978335727009]
Low-Rank Adaptation (LoRA) offers an efficient way to fine-tune large language models (LLMs).
In this paper, we propose a framework that adaptively retrieves and composes multiple LoRAs based on input prompts.
arXiv Detail & Related papers (2024-06-24T05:24:41Z) - LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report [3.304521604464247]
Low Rank Adaptation (LoRA) has emerged as one of the most widely adopted methods for Parameter-Efficient Fine-Tuning (PEFT) of Large Language Models (LLMs).
We aim to assess the viability of training and serving LLMs fine-tuned with LoRA in real-world applications.
arXiv Detail & Related papers (2024-04-29T04:01:45Z) - MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts [3.6301530893494127]
MixLoRA is an approach to construct a resource-efficient sparse MoE model based on LoRA.
Our evaluations show that MixLoRA improves accuracy by about 9% compared to state-of-the-art PEFT methods in multi-task learning scenarios.
arXiv Detail & Related papers (2024-04-22T02:15:52Z) - Run LoRA Run: Faster and Lighter LoRA Implementations [50.347242693025336]
LoRA is a technique that reduces the number of trainable parameters in a neural network by introducing low-rank adapters to linear layers.
This paper presents the RunLoRA framework for efficient implementations of LoRA.
Experiments show up to 28% speedup on language modeling networks.
arXiv Detail & Related papers (2023-12-06T10:54:34Z) - MultiLoRA: Democratizing LoRA for Better Multi-Task Learning [20.750808913757396]
LoRA achieves remarkable resource efficiency and comparable performance when adapting LLMs for specific tasks.
The weight update learned by LoRA is dominated by a small number of top singular vectors, while full fine-tuning decomposes into a set of less important unitary transforms.
We propose MultiLoRA for better multi-task adaptation by reducing the dominance of top singular vectors observed in LoRA.
arXiv Detail & Related papers (2023-11-20T02:59:18Z) - S-LoRA: Serving Thousands of Concurrent LoRA Adapters [59.490751234925206]
Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method, is often employed to adapt a base model to a multitude of tasks.
We present S-LoRA, a system designed for the scalable serving of many LoRA adapters.
arXiv Detail & Related papers (2023-11-06T17:26:17Z) - NOLA: Compressing LoRA using Linear Combination of Random Basis [22.76088132446952]
We introduce NOLA, which overcomes the rank one lower bound present in LoRA.
NOLA performs as well as LoRA models with far fewer parameters than rank-one LoRA, the best compression LoRA can achieve.
arXiv Detail & Related papers (2023-10-04T03:30:24Z) - FusionAI: Decentralized Training and Deploying LLMs with Massive
Consumer-Level GPUs [57.12856172329322]
We envision a decentralized system unlocking the vast untapped potential of consumer-level GPUs.
This system faces critical challenges, including limited CPU and GPU memory, low network bandwidth, and the variability and heterogeneity of peers and devices.
arXiv Detail & Related papers (2023-09-03T13:27:56Z) - CA-LoRA: Adapting Existing LoRA for Compressed LLMs to Enable Efficient Multi-Tasking on Personal Devices [78.16679232748196]
We introduce a Compression-Aware LoRA (CA-LoRA) framework to transfer Large Language Models (LLMs) to other tasks.
Experiment results demonstrate that CA-LoRA outperforms the vanilla LoRA methods applied to a compressed LLM.
The source code of CA-LoRA is available at https://github.com/thunlp/CA-LoRA.
arXiv Detail & Related papers (2023-07-15T04:37:11Z) - LoRA: Low-Rank Adaptation of Large Language Models [71.75808607987281]
Low-Rank Adaptation, or LoRA, freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture.
For GPT-3, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times compared to full fine-tuning.
arXiv Detail & Related papers (2021-06-17T17:37:18Z)
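To make the trainable-parameter reduction quoted in the LoRA entry above concrete, here is a back-of-the-envelope calculation for a single weight matrix; the hidden size and rank are assumed values chosen for illustration, not figures taken from the paper.

```python
# Illustrative parameter-count arithmetic for LoRA on one weight matrix.
# Hidden size and rank are assumed values, not figures from the paper.
d_in = d_out = 12288            # e.g. a GPT-3-scale hidden dimension (assumed)
r = 4                           # a typical small LoRA rank (assumed)

full_finetune = d_in * d_out    # updating the dense weight W directly
lora = r * (d_in + d_out)       # A (r x d_in) plus B (d_out x r)

print(f"full: {full_finetune:,}  lora: {lora:,}  ratio: {full_finetune / lora:.0f}x")
# full: 150,994,944  lora: 98,304  ratio: 1536x for this single matrix
```

Summed over all adapted matrices in a large model, and with optimizer state shrinking in proportion, reductions on the order of the 10,000x in trainable parameters reported for GPT-3 become plausible.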