Mixture-of-LoRAs: An Efficient Multitask Tuning for Large Language
Models
- URL: http://arxiv.org/abs/2403.03432v1
- Date: Wed, 6 Mar 2024 03:33:48 GMT
- Title: Mixture-of-LoRAs: An Efficient Multitask Tuning for Large Language
Models
- Authors: Wenfeng Feng and Chuzhan Hao and Yuewei Zhang and Yu Han and Hao Wang
- Abstract summary: We propose the Mixture-of-LoRAs (MoA) architecture for multi-task learning with large language models (LLMs).
Multiple domain-specific LoRA modules can be aligned with the expert design principles observed in Mixture-of-Experts (MoE).
Each LoRA model can be iteratively adapted to a new domain, allowing for quick domain-specific adaptation.
- Score: 7.966452497550907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Instruction Tuning has the potential to stimulate or enhance specific
capabilities of large language models (LLMs). However, achieving the right
balance of data is crucial to prevent catastrophic forgetting and interference
between tasks. To address these limitations and enhance training flexibility,
we propose the Mixture-of-LoRAs (MoA) architecture, a novel and
parameter-efficient tuning method designed for multi-task learning with LLMs.
In this paper, we start by individually training multiple domain-specific LoRA
modules using corresponding supervised corpus data. These LoRA modules can be
aligned with the expert design principles observed in Mixture-of-Experts (MoE).
Subsequently, we combine the multiple LoRAs using an explicit routing strategy
and introduce domain labels to facilitate multi-task learning, which helps
prevent interference between tasks and ultimately enhances the performance of
each individual task. Furthermore, each LoRA model can be iteratively adapted
to a new domain, allowing for quick domain-specific adaptation. Experiments on
diverse tasks demonstrate superior and robust performance, which can further
promote the wide application of domain-specific LLMs.
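As a rough illustration of the idea described above (several domain-specific LoRA experts attached to a frozen linear layer and combined by an explicit router that domain labels can supervise), here is a minimal PyTorch-style sketch. The class and parameter names are illustrative, and the paper's actual routing and training procedure may differ.

    import torch
    import torch.nn as nn

    class LoRAExpert(nn.Module):
        """One domain-specific low-rank update: delta(x) = (alpha/r) * B @ A @ x."""
        def __init__(self, d_in, d_out, rank=8, alpha=16.0):
            super().__init__()
            self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
            self.B = nn.Parameter(torch.zeros(d_out, rank))
            self.scale = alpha / rank

        def forward(self, x):                        # x: (batch, d_in)
            return (x @ self.A.T) @ self.B.T * self.scale

    class MixtureOfLoRAs(nn.Module):
        """Frozen base layer plus several LoRA experts and an explicit router."""
        def __init__(self, base: nn.Linear, num_domains: int, rank=8):
            super().__init__()
            self.base = base
            for p in self.base.parameters():         # backbone stays frozen
                p.requires_grad_(False)
            self.experts = nn.ModuleList(
                LoRAExpert(base.in_features, base.out_features, rank)
                for _ in range(num_domains)
            )
            self.router = nn.Linear(base.in_features, num_domains)

        def forward(self, x, domain_id=None):
            logits = self.router(x)                  # explicit routing scores
            if domain_id is not None:                # domain labels pick the expert
                weights = nn.functional.one_hot(domain_id, len(self.experts)).float()
            else:
                weights = logits.softmax(dim=-1)     # soft routing when no label is given
            expert_out = torch.stack([e(x) for e in self.experts], dim=-1)
            routed = (expert_out * weights.unsqueeze(1)).sum(dim=-1)
            # `logits` can additionally be trained against `domain_id` as a
            # routing-classification loss, one way to use the domain labels.
            return self.base(x) + routed, logits

Training each LoRAExpert separately on its domain corpus and only then combining them through the router would mirror the two-stage recipe described in the abstract.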
Related papers
- In-Context Meta LoRA Generation [61.690065588534296]
Low-rank Adaptation (LoRA) has demonstrated remarkable capabilities for task-specific fine-tuning.
We propose In-Context Meta LoRA (ICM-LoRA), a novel approach that efficiently achieves task-specific customization of large language models.
ICM-LoRA enables more accurate LoRA parameter reconstruction than current parameter reconstruction methods.
arXiv Detail & Related papers (2025-01-29T13:12:01Z) - Each Rank Could be an Expert: Single-Ranked Mixture of Experts LoRA for Multi-Task Learning [53.98941571078398]
Low-Rank Adaptation (LoRA) is widely used for adapting large language models (LLMs) to specific domains due to its efficiency and modularity.
Recent works adopt Mixture of Experts (MoE) by treating each LoRA module as an expert, thereby mitigating task interference through multiple specialized LoRA modules.
While effective, these methods often isolate knowledge within individual tasks, failing to fully exploit the shared knowledge across related tasks.
We propose Single-ranked Mixture of Experts LoRA (SMoRA), which embeds MoE into LoRA by treating each rank as an independent expert.
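A minimal sketch of that "each rank is an expert" idea, assuming a gate that weights the r rank-1 components of a single LoRA pair per input; the names and the sigmoid gating below are illustrative rather than SMoRA's exact mechanism.

    import torch
    import torch.nn as nn

    class RankGatedLoRA(nn.Module):
        def __init__(self, d_in, d_out, rank=16):
            super().__init__()
            self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # rank x d_in
            self.B = nn.Parameter(torch.zeros(d_out, rank))        # d_out x rank
            self.gate = nn.Linear(d_in, rank)                      # one logit per rank

        def forward(self, x):                      # x: (batch, d_in)
            g = torch.sigmoid(self.gate(x))        # per-rank expert weights
            h = (x @ self.A.T) * g                 # gate each rank-1 component
            return h @ self.B.T                    # (batch, d_out)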
arXiv Detail & Related papers (2025-01-25T06:56:39Z) - Unlocking Tuning-Free Few-Shot Adaptability in Visual Foundation Models by Recycling Pre-Tuned LoRAs [76.40876036912537]
Large Language Models (LLMs) demonstrate strong few-shot adaptability without requiring fine-tuning.
By contrast, current Visual Foundation Models (VFMs) require explicit fine-tuning with sufficient tuning data.
We propose a framework, LoRA Recycle, that distills a meta-LoRA from diverse pre-tuned LoRAs with a meta-learning objective.
arXiv Detail & Related papers (2024-12-03T07:25:30Z) - MTL-LoRA: Low-Rank Adaptation for Multi-Task Learning [74.43869839954168]
We propose MTL-LoRA, which retains the advantages of low-rank adaptation while significantly enhancing multi-task learning capabilities.
MTL-LoRA augments LoRA by incorporating additional task-adaptive parameters that differentiate task-specific information.
This approach enables large language models (LLMs) pre-trained on a general corpus to adapt to different target task domains with a limited number of trainable parameters.
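As a hedged sketch of what "task-adaptive parameters" on top of a shared LoRA pair could look like, the toy module below applies a small per-task transform in the low-rank space; this illustrates the general idea only and is not MTL-LoRA's exact parameterization.

    import torch
    import torch.nn as nn

    class TaskAdaptiveLoRA(nn.Module):
        def __init__(self, d_in, d_out, num_tasks, rank=8):
            super().__init__()
            self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # shared down-projection
            self.B = nn.Parameter(torch.zeros(d_out, rank))        # shared up-projection
            # one small task-specific matrix per task, applied in the rank space
            self.task_transform = nn.Parameter(torch.eye(rank).repeat(num_tasks, 1, 1))

        def forward(self, x, task_id):             # x: (batch, d_in); task_id: (batch,) long
            h = x @ self.A.T                       # (batch, rank)
            h = torch.einsum("br,brs->bs", h, self.task_transform[task_id])
            return h @ self.B.T                    # (batch, d_out)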
arXiv Detail & Related papers (2024-10-12T08:32:26Z) - BoRA: Bayesian Hierarchical Low-Rank Adaption for Multi-Task Large Language Models [0.0]
This paper introduces Bayesian Hierarchical Low-Rank Adaption (BoRA), a novel method for finetuning multi-task Large Language Models (LLMs).
BoRA addresses trade-offs by leveraging a Bayesian hierarchical model that allows tasks to share information through global hierarchical priors.
Our experimental results show that BoRA outperforms both individual and unified model approaches, achieving lower perplexity and better generalization across tasks.
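The summary describes the hierarchy only at a high level; one plausible form of such a shared global prior over per-task adapter weights (the symbols W_k, W_0, sigma, and tau are illustrative, not taken from the paper) is:

    W_k \sim \mathcal{N}(W_0, \sigma^2 I), \quad k = 1, \dots, K   % per-task LoRA weights
    W_0 \sim \mathcal{N}(0, \tau^2 I)                               % shared global mean

Under MAP estimation this corresponds to adding a penalty \sum_k \|W_k - W_0\|_2^2 / (2\sigma^2) to the per-task losses, which pulls every task-specific adapter toward a shared mean and is one concrete way tasks can share information through the prior.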
arXiv Detail & Related papers (2024-07-08T06:38:50Z) - Multimodal Instruction Tuning with Conditional Mixture of LoRA [51.58020580970644]
This paper introduces a novel approach that integrates multimodal instruction tuning with Low-Rank Adaptation (LoRA).
It innovates upon LoRA by dynamically constructing low-rank adaptation matrices tailored to the unique demands of each input instance.
Experimental results on various multimodal evaluation datasets indicate that MixLoRA outperforms conventional LoRA with the same or even higher ranks.
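A minimal sketch of instance-conditioned low-rank adaptation, assuming a pool of candidate factors mixed with weights predicted from each input so the effective LoRA matrices differ per instance; the pool size, selector, and names are illustrative rather than MixLoRA's exact construction.

    import torch
    import torch.nn as nn

    class ConditionalLoRA(nn.Module):
        def __init__(self, d_in, d_out, rank=8, num_factors=4):
            super().__init__()
            self.A_pool = nn.Parameter(torch.randn(num_factors, rank, d_in) * 0.01)
            self.B_pool = nn.Parameter(torch.zeros(num_factors, d_out, rank))
            self.selector = nn.Linear(d_in, num_factors)     # instance-conditioned mixer

        def forward(self, x):                                # x: (batch, d_in)
            w = self.selector(x).softmax(dim=-1)             # (batch, num_factors)
            A = torch.einsum("bf,fri->bri", w, self.A_pool)  # per-instance down-projection
            B = torch.einsum("bf,for->bor", w, self.B_pool)  # per-instance up-projection
            h = torch.einsum("bi,bri->br", x, A)             # (batch, rank)
            return torch.einsum("br,bor->bo", h, B)          # (batch, d_out)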
arXiv Detail & Related papers (2024-02-24T20:15:31Z) - LLaVA-MoLE: Sparse Mixture of LoRA Experts for Mitigating Data Conflicts
in Instruction Finetuning MLLMs [29.96139552754377]
We propose an efficient Mixture of Experts (MoE) design for instruction finetuning MLLMs.
Extensive experiments show that LLaVA-MoLE effectively mitigates the data conflict issue when mixing multiple distinct instruction datasets.
LLaVA-MoLE can even outperform the plain-LoRA baseline trained with twice as many samples.
arXiv Detail & Related papers (2024-01-29T13:48:36Z) - Small LLMs Are Weak Tool Learners: A Multi-LLM Agent [73.54562551341454]
Large Language Model (LLM) agents significantly extend the capabilities of standalone LLMs.
We propose a novel approach that decomposes an agent's capabilities into a planner, caller, and summarizer.
This modular framework facilitates individual updates and the potential use of smaller LLMs for building each capability.
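A minimal sketch of that planner/caller/summarizer decomposition, assuming each role is any text-in/text-out model behind the same interface; the prompts, the step and tool-call formats, and the run_agent helper are illustrative, not the paper's protocol.

    from typing import Callable, Dict

    LLM = Callable[[str], str]   # any text-in/text-out model, large or small

    def run_agent(task: str, planner: LLM, caller: LLM, summarizer: LLM,
                  tools: Dict[str, Callable[[str], str]]) -> str:
        # the planner breaks the task into steps (one per line, by assumption)
        plan = planner(f"Break this task into tool-usable steps:\n{task}")
        observations = []
        for step in plan.splitlines():
            if not step.strip():
                continue
            # the caller picks a tool and its argument, here formatted "tool: argument"
            call = caller(f"Step: {step}\nAvailable tools: {list(tools)}")
            tool_name, _, tool_arg = call.partition(":")
            tool = tools.get(tool_name.strip())
            observations.append(tool(tool_arg.strip()) if tool else call)
        # the summarizer turns intermediate observations into the final answer
        return summarizer(f"Task: {task}\nObservations:\n" + "\n".join(observations))

Because the three roles communicate only through text, each one can be fine-tuned or swapped for a smaller model independently, which is the modularity the summary points to.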
arXiv Detail & Related papers (2024-01-14T16:17:07Z)