Towards Modular LLMs by Building and Reusing a Library of LoRAs
- URL: http://arxiv.org/abs/2405.11157v1
- Date: Sat, 18 May 2024 03:02:23 GMT
- Title: Towards Modular LLMs by Building and Reusing a Library of LoRAs
- Authors: Oleksiy Ostapenko, Zhan Su, Edoardo Maria Ponti, Laurent Charlin, Nicolas Le Roux, Matheus Pereira, Lucas Caccia, Alessandro Sordoni
- Abstract summary: We study how to best build a library of adapters given multi-task data.
We introduce model-based clustering, MBC, a method that groups tasks based on the similarity of their adapter parameters.
To re-use the library, we present a novel zero-shot routing mechanism, Arrow, which enables dynamic selection of the most relevant adapters.
- Score: 64.43376695346538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing number of parameter-efficient adaptations of a base large language model (LLM) calls for studying whether we can reuse such trained adapters to improve performance for new tasks. We study how to best build a library of adapters given multi-task data and devise techniques for both zero-shot and supervised task generalization through routing in such library. We benchmark existing approaches to build this library and introduce model-based clustering, MBC, a method that groups tasks based on the similarity of their adapter parameters, indirectly optimizing for transfer across the multi-task dataset. To re-use the library, we present a novel zero-shot routing mechanism, Arrow, which enables dynamic selection of the most relevant adapters for new inputs without the need for retraining. We experiment with several LLMs, such as Phi-2 and Mistral, on a wide array of held-out tasks, verifying that MBC-based adapters and Arrow routing lead to superior generalization to new tasks. We make steps towards creating modular, adaptable LLMs that can match or outperform traditional joint training.
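The two ideas above can be illustrated with a minimal sketch: cluster tasks by the similarity of their LoRA parameters (MBC), then route a new input to the highest-scoring adapters without retraining (Arrow-style). The toy adapter shapes, the k-means settings, and the use of the top singular direction of the LoRA update as a routing prototype are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch of MBC-style clustering and Arrow-style routing.
# Adapter shapes, prototype choice, and clustering settings are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
d_model, rank, n_tasks = 64, 4, 12

# Toy library: one LoRA (A, B) pair per task for a single linear layer.
library = {
    f"task_{i}": (rng.normal(size=(rank, d_model)),   # A: r x d
                  rng.normal(size=(d_model, rank)))   # B: d x r
    for i in range(n_tasks)
}

# --- MBC: cluster tasks by similarity of their adapter parameters ---
flat = np.stack([np.concatenate([A.ravel(), B.ravel()]) for A, B in library.values()])
flat /= np.linalg.norm(flat, axis=1, keepdims=True)           # cosine geometry
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(flat)
print(dict(zip(library, clusters)))

# --- Arrow-style routing: score adapters against a new input, no retraining ---
# Prototype per adapter: top singular direction of the LoRA update B @ A
# (one plausible choice; the paper defines its own router construction).
prototypes = {}
for name, (A, B) in library.items():
    _, _, vt = np.linalg.svd(B @ A, full_matrices=False)
    prototypes[name] = vt[0]                                   # direction in input space

x = rng.normal(size=d_model)                                   # hidden state of a new input
scores = {name: abs(p @ x) for name, p in prototypes.items()}
top_k = sorted(scores, key=scores.get, reverse=True)[:2]
print("selected adapters:", top_k)
```

In the paper's setting, each cluster of tasks would train a shared adapter and routing would combine the top-scoring adapters per input; the single top-k selection here stands in for that step.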
Related papers
- Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models [79.41139393080736]
Large language models (LLMs) have rapidly advanced and demonstrated impressive capabilities.
In-Context Learning (ICL) and Parameter-Efficient Fine-Tuning (PEFT) are currently the two mainstream methods for adapting LLMs to downstream tasks.
We propose Reference Trustable Decoding (RTD), a paradigm that allows models to quickly adapt to new tasks without fine-tuning.
arXiv Detail & Related papers (2024-09-30T10:48:20Z)
- OneGen: Efficient One-Pass Unified Generation and Retrieval for LLMs [44.054569398300266]
OneGen is a one-pass unified generation and retrieval framework for LLMs.
OneGen bridges the traditionally separate training approaches for generation and retrieval by incorporating retrieval tokens generated autoregressively.
Results show that integrating generation and retrieval within the same context preserves the generative capabilities of LLMs while improving retrieval performance.
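As a rough illustration of unifying generation and retrieval in one pass, the sketch below assumes a special retrieval token whose hidden state is reused as a dense query over a document index; the toy decoding stub and the index are placeholders, not OneGen's actual components.

```python
# Conceptual sketch: one autoregressive pass that both generates text and
# produces retrieval queries via special [RET] tokens (toy stand-ins).
import numpy as np

rng = np.random.default_rng(1)
d = 32
doc_embeddings = rng.normal(size=(5, d))                        # toy document index
doc_embeddings /= np.linalg.norm(doc_embeddings, axis=1, keepdims=True)

def toy_decode_step(token):
    """Stand-in for one decoding step: returns (next_token, hidden_state)."""
    hidden = rng.normal(size=d)
    nxt = "[RET]" if token == "needs_evidence" else "word"
    return nxt, hidden

generated, retrieved = [], []
token = "needs_evidence"
for _ in range(4):
    token, hidden = toy_decode_step(token)
    generated.append(token)
    if token == "[RET]":
        # The retrieval token's hidden state is reused as a dense query.
        q = hidden / np.linalg.norm(hidden)
        retrieved.append(int(np.argmax(doc_embeddings @ q)))

print(generated, retrieved)
```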
arXiv Detail & Related papers (2024-09-08T16:35:19Z)
- MergeRepair: An Exploratory Study on Merging Task-Specific Adapters in Code LLMs for Automated Program Repair [5.006064616335817]
Large Language Models (LLMs) have shown good performance in several software development-related tasks.
This research proposes continual merging and empirically studies the capabilities of merged adapters in Code LLMs.
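As a concrete (and generic) illustration of adapter merging, a common baseline is to average the LoRA matrices of several task-specific adapters; the paper's continual-merging procedure for Code LLMs may differ from this plain weighted average.

```python
# Simple weight-space merging of task-specific LoRA adapters by averaging.
# This is a generic baseline for illustration, not the paper's exact method.
import numpy as np

def merge_adapters(adapters, weights=None):
    """adapters: list of (A, B) LoRA matrices with identical shapes."""
    n = len(adapters)
    weights = weights or [1.0 / n] * n
    A = sum(w * a for w, (a, _) in zip(weights, adapters))
    B = sum(w * b for w, (_, b) in zip(weights, adapters))
    return A, B

rng = np.random.default_rng(2)
tasks = [(rng.normal(size=(8, 128)), rng.normal(size=(128, 8))) for _ in range(3)]
A_merged, B_merged = merge_adapters(tasks)
print(A_merged.shape, B_merged.shape)    # (8, 128) (128, 8)
```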
arXiv Detail & Related papers (2024-08-18T18:45:48Z)
- One Token Can Help! Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models [67.49462724595445]
Retrieval-augmented generation (RAG) is a promising way to improve large language models (LLMs).
We propose a novel method that involves learning scalable and pluggable virtual tokens for RAG.
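A minimal sketch of the general "pluggable virtual tokens" idea, in the spirit of prompt tuning: a small set of trainable embeddings is prepended to the input of a frozen model. The dimensions and the plugging scheme below are assumptions for illustration and may differ from the paper's method.

```python
# Prompt-tuning-style sketch: learnable virtual-token embeddings prepended to
# the input of a frozen base model (toy dimensions, illustrative only).
import torch
import torch.nn as nn

d_model, n_virtual, vocab = 64, 8, 1000

embedding = nn.Embedding(vocab, d_model)             # frozen base embeddings
virtual_tokens = nn.Parameter(torch.randn(n_virtual, d_model) * 0.02)  # trainable
for p in embedding.parameters():
    p.requires_grad = False

def build_inputs(input_ids):
    """Prepend virtual tokens to the embedded sequence (batch, seq, d_model)."""
    tok = embedding(input_ids)                        # (batch, seq, d)
    vt = virtual_tokens.unsqueeze(0).expand(tok.size(0), -1, -1)
    return torch.cat([vt, tok], dim=1)                # (batch, n_virtual + seq, d)

x = build_inputs(torch.randint(0, vocab, (2, 16)))
print(x.shape)            # torch.Size([2, 24, 64])
# Only `virtual_tokens` would receive gradients during adaptation.
```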
arXiv Detail & Related papers (2024-05-30T03:44:54Z)
- Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning [109.25673110120906]
We introduce Adapters, an open-source library that unifies parameter-efficient and modular transfer learning in large language models.
By integrating 10 diverse adapter methods into a unified interface, Adapters offers ease of use and flexible configuration.
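An illustrative usage sketch following the Adapters library's documented pattern of attaching, training, and activating an adapter behind one interface; class and configuration names should be checked against the library's documentation.

```python
# Sketch of the unified interface idea: attach, train, and activate an adapter
# on a pretrained model. Treat names and configs as indicative, not authoritative.
from adapters import AutoAdapterModel, LoRAConfig

model = AutoAdapterModel.from_pretrained("roberta-base")
model.add_adapter("my_task", config=LoRAConfig(r=8, alpha=16))
model.train_adapter("my_task")        # freeze the base model, train only the adapter
model.set_active_adapters("my_task")  # route the forward pass through the adapter
```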
arXiv Detail & Related papers (2023-11-18T13:53:26Z)
- Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations [53.76682562935373]
We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools.
InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs.
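A generic sketch of the "LLM as brain, recommender models as tools" pattern; the tool names and the decision stub below are hypothetical, not InteRecAgent's actual components.

```python
# Generic LLM-as-controller loop: the LLM decides which recommender "tool" to
# call; the tool registry, parsing, and llm_decide stub are hypothetical.
def llm_decide(user_message: str) -> str:
    """Stand-in for an LLM call that returns a tool name."""
    return "rank_items" if "recommend" in user_message.lower() else "lookup_item"

TOOLS = {
    "lookup_item": lambda msg: f"info about item mentioned in: {msg!r}",
    "rank_items": lambda msg: ["item_42", "item_7", "item_13"],   # toy recommender
}

def chat_turn(user_message: str):
    tool = llm_decide(user_message)
    result = TOOLS[tool](user_message)
    # In a full system the LLM would turn `result` into a natural-language reply.
    return tool, result

print(chat_turn("Can you recommend a movie like Inception?"))
```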
arXiv Detail & Related papers (2023-08-31T07:36:44Z)
- LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models [75.25782573728677]
This paper presents a framework for adapter-based parameter-efficient fine-tuning (PEFT) of large language models (LLMs).
The framework includes state-of-the-art open-access LLMs such as LLaMA, BLOOM, and GPT-J, as well as widely used adapter methods such as series adapters, parallel adapters, prompt-based learning, and reparametrization-based methods.
We evaluate the effectiveness of the adapters on fourteen datasets from two different reasoning tasks, Arithmetic Reasoning and Commonsense Reasoning.
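For the reparametrization-based family, a hedged sketch using the Hugging Face PEFT library shows the basic LoRA setup on an open-access LLM; the model name and hyperparameters are assumptions, and the paper's LLM-Adapters framework provides its own configurations.

```python
# Sketch of reparametrization-based PEFT (LoRA) on an open-access LLM using the
# Hugging Face PEFT library; model name and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()   # only the low-rank adapter weights are trainable
```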
arXiv Detail & Related papers (2023-04-04T16:31:37Z)
- Multipath agents for modular multitask ML systems [2.579908688646812]
This work introduces a methodology for defining multiple methods as distinct agents.
Agents can collaborate and compete to generate and improve ML models for given tasks.
arXiv Detail & Related papers (2023-02-06T11:57:45Z)