LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of
Large Language Models
- URL: http://arxiv.org/abs/2304.01933v3
- Date: Mon, 9 Oct 2023 15:38:46 GMT
- Title: LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of
Large Language Models
- Authors: Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing,
Xing Xu, Soujanya Poria, Roy Ka-Wei Lee
- Abstract summary: This paper presents LLM-Adapters, a framework for adapter-based parameter-efficient fine-tuning (PEFT) of large language models (LLMs).
The framework includes state-of-the-art open-access LLMs such as LLaMA, BLOOM, and GPT-J, as well as widely used adapters such as Series adapters, Parallel adapters, Prompt-based learning, and Reparametrization-based methods.
We evaluate the effectiveness of the adapters on fourteen datasets from two different reasoning tasks, Arithmetic Reasoning and Commonsense Reasoning.
- Score: 75.25782573728677
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The success of large language models (LLMs), like GPT-4 and ChatGPT, has led
to the development of numerous cost-effective and accessible alternatives that
are created by finetuning open-access LLMs with task-specific data (e.g.,
ChatDoctor) or instruction data (e.g., Alpaca). Among the various fine-tuning
methods, adapter-based parameter-efficient fine-tuning (PEFT) is undoubtedly
one of the most attractive topics, as it only requires fine-tuning a few
external parameters instead of the entire LLMs while achieving comparable or
even better performance. To enable further research on PEFT methods of LLMs,
this paper presents LLM-Adapters, an easy-to-use framework that integrates
various adapters into LLMs and can execute these adapter-based PEFT methods of
LLMs for different tasks. The framework includes state-of-the-art open-access
LLMs such as LLaMA, BLOOM, and GPT-J, as well as widely used adapters such as
Series adapters, Parallel adapters, Prompt-based learning, and
Reparametrization-based methods. Moreover, we conduct extensive empirical
studies of the impact of adapter types, placement locations, and
hyper-parameters to determine the best design for each adapter-based method. We evaluate
the effectiveness of the adapters on fourteen datasets from two different
reasoning tasks, Arithmetic Reasoning and Commonsense Reasoning. The results
demonstrate that using adapter-based PEFT in smaller-scale LLMs (7B) with few
extra trainable parameters yields comparable, and in some cases superior,
performance to powerful LLMs (175B) in zero-shot inference on both reasoning
tasks.
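To make the adapter families named above concrete, here is a minimal, illustrative PyTorch sketch of the weight-side variants: a Series (bottleneck) adapter inserted after a sublayer, a Parallel adapter computed alongside a sublayer, and a LoRA-style Reparametrization of a linear layer. This is not the LLM-Adapters API; the module names, bottleneck sizes, and placement are assumptions chosen for exposition.

```python
# Illustrative sketch only -- not the LLM-Adapters API.
# Module names, sizes, and placement are assumptions for exposition.
import torch
import torch.nn as nn


class SeriesAdapter(nn.Module):
    """Bottleneck adapter applied sequentially after a frozen sublayer output."""
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen model's behaviour recoverable.
        return h + self.up(self.act(self.down(h)))


class ParallelAdapter(nn.Module):
    """Bottleneck adapter computed in parallel with a frozen sublayer."""
    def __init__(self, sublayer: nn.Module, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.sublayer = sublayer          # frozen FFN or attention block
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Adapter branch reads the sublayer input and is added to its output.
        return self.sublayer(x) + self.up(self.act(self.down(x)))


class LoRALinear(nn.Module):
    """Reparametrization-based method: W x + scaling * (B A) x, with only A, B trainable."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weight
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling


if __name__ == "__main__":
    d_model = 512
    x = torch.randn(2, 16, d_model)
    ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                        nn.Linear(4 * d_model, d_model))

    print(SeriesAdapter(d_model)(ffn(x)).shape)              # series: after the sublayer
    print(ParallelAdapter(ffn, d_model)(x).shape)            # parallel: alongside it
    print(LoRALinear(nn.Linear(d_model, d_model))(x).shape)  # reparametrization
```

Prompt-based learning (e.g., prefix or prompt tuning), the fourth family, instead prepends trainable virtual token embeddings to the input sequence and leaves the Transformer weights untouched.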
Related papers
- 3-in-1: 2D Rotary Adaptation for Efficient Finetuning, Efficient Batching and Composability [6.451743797015637]
We introduce a novel method, RoAd, which employs a straightforward 2D rotation to adapt large language models (LLMs).
RoAd is remarkably parameter-efficient, delivering optimal performance on GLUE, eight commonsense reasoning tasks, and four arithmetic reasoning tasks with 0.1% trainable parameters.
arXiv Detail & Related papers (2024-08-28T08:45:29Z) - Extend Model Merging from Fine-Tuned to Pre-Trained Large Language Models via Weight Disentanglement [72.97553348776425]
We make a pioneering effort to broaden the applicability of merging techniques from fine-tuned (FT) to pre-trained (PT) LLMs.
We introduce an approach based on WeIght DisENtanglement (WIDEN) to effectively extend the merging scope.
We merge Qwen1.5-Chat (an FT LLM with instruction-following skills) with Sailor (a PT LLM with multilingual abilities) across 7B and 14B model scales.
arXiv Detail & Related papers (2024-08-06T10:46:46Z) - Delta-CoMe: Training-Free Delta-Compression with Mixed-Precision for Large Language Models [79.46938238953916]
Fine-tuning large language models (LLMs) to diverse applications is crucial to meet complex demands.
Recent studies suggest decomposing a fine-tuned LLM into a base model and corresponding delta weights, which are then compressed using low-rank or low-bit approaches to reduce costs (an illustrative sketch of this decomposition appears after this list).
In this work, we observe that existing low-rank and low-bit compression methods can significantly harm the model performance for task-specific fine-tuned LLMs.
arXiv Detail & Related papers (2024-06-13T07:57:27Z) - An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models [14.202759186103497]
Multimodal large language models (MLLMs) have demonstrated remarkable capabilities in multimodal tasks.
However, fine-tuning all parameters of MLLMs has become challenging as they usually contain billions of parameters.
This paper conducts empirical studies using four popular PEFT methods to fine-tune the LLM component of open-source MLLMs.
arXiv Detail & Related papers (2024-06-07T17:58:11Z) - One Token Can Help! Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models [67.49462724595445]
Retrieval-augmented generation (RAG) is a promising way to improve large language models (LLMs).
We propose a novel method that involves learning scalable and pluggable virtual tokens for RAG.
arXiv Detail & Related papers (2024-05-30T03:44:54Z) - Towards Modular LLMs by Building and Reusing a Library of LoRAs [64.43376695346538]
We study how to best build a library of adapters given multi-task data.
We introduce model-based clustering (MBC), a method that groups tasks based on the similarity of their adapter parameters.
To re-use the library, we present a novel zero-shot routing mechanism, Arrow, which enables dynamic selection of the most relevant adapters.
arXiv Detail & Related papers (2024-05-18T03:02:23Z) - KnowLA: Enhancing Parameter-efficient Finetuning with Knowledgeable Adaptation [18.593612008576265]
We propose a knowledgeable adaptation method called KnowLA.
It inserts an adaptation layer into an LLM to integrate the embeddings of entities appearing in the input text.
arXiv Detail & Related papers (2024-03-22T04:48:41Z) - Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation
with Large Language Models [12.708117108874083]
Large Language Models (LLMs) generate code snippets given natural language intents in a zero-shot manner, i.e., without the need for specific fine-tuning.
Previous research explored In-Context Learning (ICL) as a strategy to guide the LLM generative process with task-specific prompt examples.
In this paper, we deliver a comprehensive study of PEFT techniques for LLMs under the automated code generation scenario.
arXiv Detail & Related papers (2023-08-21T04:31:06Z) - Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large
Language Models [77.2078051555533]
We propose a novel and affordable solution, called MMA, for effective vision-language (VL) adaptation of large language models (LLMs).
Instead of using large neural networks to connect the image encoder and LLM, MMA adopts lightweight modules, i.e., adapters.
MMA is also equipped with a routing algorithm to help LLMs achieve an automatic shift between single- and multi-modal instructions.
arXiv Detail & Related papers (2023-05-24T11:06:15Z)