Multi-Head Adapter Routing for Cross-Task Generalization
- URL: http://arxiv.org/abs/2211.03831v3
- Date: Mon, 13 Nov 2023 15:09:59 GMT
- Title: Multi-Head Adapter Routing for Cross-Task Generalization
- Authors: Lucas Caccia, Edoardo Ponti, Zhan Su, Matheus Pereira, Nicolas Le
Roux, Alessandro Sordoni
- Abstract summary: Polytropon learns an inventory of adapters and a routing function that selects a subset of adapters for each task during both pre-training and few-shot adaptation.
We find that routing is most beneficial during multi-task pre-training rather than during few-shot adaptation.
- Score: 56.75667096355806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parameter-efficient fine-tuning (PEFT) for cross-task generalization consists
in pre-training adapters on a multi-task training set before few-shot
adaptation to test tasks. Polytropon [Ponti et al., 2023] ($\texttt{Poly}$)
jointly learns an inventory of adapters and a routing function that selects a
(variable-size) subset of adapters for each task during both pre-training and
few-shot adaptation. In this paper, we investigate the role that adapter
routing plays in its success and design new variants based on our findings.
First, we build on the intuition that finer-grained routing provides more
expressivity. Hence, we propose $\texttt{MHR}$ (Multi-Head Routing) which
combines subsets of adapter parameters and outperforms $\texttt{Poly}$ under a
comparable parameter budget; by only fine-tuning the routing function and not
the adapters ($\texttt{MHR}$-$z$) we achieve competitive performance with
extreme parameter efficiency. Second, we find that
$\texttt{Poly}$/$\texttt{MHR}$ performance is a result of better multi-task
optimization, rather than modular inductive biases that facilitate adapter
recombination and local adaptation, as previously hypothesized. In fact, we
find that $\texttt{MHR}$ exhibits high gradient alignment between training
tasks. We find that routing is most beneficial during multi-task pre-training
rather than during few-shot adaptation and propose $\texttt{MHR}$-$\mu$, which
discards routing and fine-tunes the average of the pre-trained adapters on each
downstream task. This establishes $\texttt{MHR}$-$\mu$ as an effective method
for single-adapter fine-tuning. We also show that $\texttt{MHR}$-$\mu$ can be
used as an effective zero-shot transfer method by training the average of the
pre-trained adapters for a few additional steps on the multi-task training set:
this yields gains of up to 3% in absolute accuracy over the baselines.
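
The routing mechanism can be sketched concretely. Below is a minimal PyTorch sketch of multi-head routing over a bank of LoRA adapters attached to a linear base layer; the class name `MultiHeadRoutedLoRA`, the default sizes, and the exact per-head factorization are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of multi-head adapter routing (MHR-style) over LoRA adapters.
# Assumptions: a PyTorch nn.Linear base layer, per-head (A, B) LoRA factors per
# skill, and per-task softmax routing; names and defaults are hypothetical.
import torch
import torch.nn as nn


class MultiHeadRoutedLoRA(nn.Module):
    """One linear layer augmented with a routed inventory of LoRA adapters."""

    def __init__(self, base: nn.Linear, n_tasks: int, n_skills: int = 8,
                 n_heads: int = 4, rank: int = 4):
        super().__init__()
        assert base.in_features % n_heads == 0
        self.base = base
        self.n_heads = n_heads
        self.head_dim = base.in_features // n_heads
        # Inventory of LoRA factors: one (A, B) pair per skill and per head.
        self.A = nn.Parameter(0.01 * torch.randn(n_skills, n_heads, self.head_dim, rank))
        self.B = nn.Parameter(torch.zeros(n_skills, n_heads, rank, base.out_features))
        # Per-task routing logits: one distribution over skills for every head.
        self.router = nn.Parameter(torch.zeros(n_tasks, n_heads, n_skills))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # Routing weights for this task: [n_heads, n_skills].
        w = torch.softmax(self.router[task_id], dim=-1)
        # Mix the inventory per head into a single adapter for this task.
        A_mix = torch.einsum("hs,shdr->hdr", w, self.A)   # [n_heads, head_dim, rank]
        B_mix = torch.einsum("hs,shro->hro", w, self.B)   # [n_heads, rank, out]
        # Split the input into heads and apply the mixed low-rank update.
        x_h = x.reshape(*x.shape[:-1], self.n_heads, self.head_dim)
        delta = torch.einsum("...hd,hdr,hro->...o", x_h, A_mix, B_mix)
        return self.base(x) + delta


# Example: a 512-d layer shared across 100 pre-training tasks.
layer = MultiHeadRoutedLoRA(nn.Linear(512, 512), n_tasks=100)
out = layer(torch.randn(2, 16, 512), task_id=3)   # [2, 16, 512]
```

Setting `n_heads = 1` recovers Poly-style routing with a single routing vector per task; for few-shot adaptation in the spirit of $\texttt{MHR}$-$z$, one would freeze `A` and `B` and train only `router`.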
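The gradient-alignment finding can be probed with a simple measurement: compute each task's gradient on the shared (routed) parameters and take the pairwise cosine similarity. The sketch below uses hypothetical `model`, `batches_by_task`, and `loss_fn` placeholders and is not the authors' evaluation code.

```python
# Sketch of a cross-task gradient-alignment probe (assumed helper names).
import torch
import torch.nn.functional as F


def task_gradient(model, batch, loss_fn):
    """Flattened gradient of one task's loss w.r.t. all trainable parameters."""
    model.zero_grad()
    loss_fn(model, batch).backward()
    grads = [p.grad.flatten() for p in model.parameters()
             if p.requires_grad and p.grad is not None]
    return torch.cat(grads)


def gradient_alignment(model, batches_by_task, loss_fn):
    """Mean pairwise cosine similarity between per-task gradients."""
    gs = [task_gradient(model, b, loss_fn) for b in batches_by_task]
    sims = [F.cosine_similarity(gs[i], gs[j], dim=0)
            for i in range(len(gs)) for j in range(i + 1, len(gs))]
    return torch.stack(sims).mean()
```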
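Finally, the $\texttt{MHR}$-$\mu$ recipe amounts to discarding the router and fine-tuning a single averaged adapter. The sketch below reuses the hypothetical `MultiHeadRoutedLoRA` class from above; `collapse_to_average` is an assumed helper name, not the authors' code.

```python
# Sketch of the MHR-mu idea: replace the skill inventory with its mean and
# fine-tune only that single averaged adapter (routing is discarded).
import torch


def collapse_to_average(layer: "MultiHeadRoutedLoRA") -> None:
    """Collapse the skill inventory into its mean, discarding routing."""
    with torch.no_grad():
        layer.A.data = layer.A.mean(dim=0, keepdim=True)   # one averaged adapter
        layer.B.data = layer.B.mean(dim=0, keepdim=True)
        # A single remaining skill makes every softmax routing weight equal to 1.
        layer.router.data = torch.zeros_like(layer.router[:, :, :1])
    layer.router.requires_grad_(False)
    # Downstream, only the averaged A/B factors are fine-tuned on the target task.
```

For the zero-shot variant described in the abstract, the averaged adapter would instead be trained for a few additional steps on the multi-task training set before transfer.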
Related papers
- Hierarchical Recurrent Adapters for Efficient Multi-Task Adaptation of Large Speech Models [12.230087530720652]
We introduce an adapter module that is more efficient in large-scale multi-task adaptation scenarios.
The adapter consists of a single shared controller network and multiple task-level adapter heads.
arXiv Detail & Related papers (2024-03-25T17:21:56Z)
- Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning [30.251155072822055]
Prototype-based HyperAdapter (PHA) is a novel framework built on the adapter-tuning and hypernetwork.
It introduces an instance-dense retriever and prototypical hypernetwork to generate conditional modules in a sample-efficient manner.
We show that PHA strikes a better trade-off between trainable parameters, accuracy on stream tasks, and sample efficiency.
arXiv Detail & Related papers (2023-10-18T02:42:17Z)
- MerA: Merging Pretrained Adapters For Few-Shot Learning [71.44422347502409]
We propose Merging Pretrained Adapters (MerA), which efficiently incorporates pretrained adapters into a single model through model fusion.
Experiments on two PLMs demonstrate that MerA achieves substantial improvements compared to both single adapters and AdapterFusion.
arXiv Detail & Related papers (2023-08-30T12:10:17Z)
- Vision Transformer Adapters for Generalizable Multitask Learning [61.79647180647685]
We introduce the first multitasking vision transformer adapters that learn generalizable task affinities.
Our adapters can simultaneously solve multiple dense vision tasks in a parameter-efficient manner.
In contrast to concurrent methods, we do not require retraining or fine-tuning whenever a new task or domain is added.
arXiv Detail & Related papers (2023-08-23T18:40:48Z)
- Cross-Modal Adapter for Text-Video Retrieval [91.9575196703281]
We present a novel Cross-Modal Adapter for parameter-efficient fine-tuning.
Inspired by adapter-based methods, we adjust the pre-trained model with a few parameterization layers.
It achieves superior or comparable performance compared to fully fine-tuned methods on MSR-VTT, MSVD, VATEX, ActivityNet, and DiDeMo datasets.
arXiv Detail & Related papers (2022-11-17T16:15:30Z)
- AdaMix: Mixture-of-Adapter for Parameter-efficient Tuning of Large Language Models [119.7093605087114]
Fine-tuning large-scale pre-trained language models on downstream tasks requires updating hundreds of millions of parameters.
This not only increases the serving cost to store a large copy of the model weights for every task, but also exhibits instability during few-shot task adaptation.
We introduce a new mechanism to improve adapter capacity without increasing parameters or computational cost by two key techniques.
arXiv Detail & Related papers (2022-05-24T23:41:22Z)
- AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks [55.705355299065474]
Transformer-based pre-trained models with millions of parameters require large storage.
Recent approaches tackle this shortcoming by training adapters, but these approaches still require a relatively large number of parameters.
In this study, AdapterBias, a surprisingly simple yet effective adapter architecture, is proposed.
arXiv Detail & Related papers (2022-04-30T16:49:41Z)
- Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks [37.2958914602899]
We show that we can learn adapter parameters for all layers and tasks by generating them using shared hypernetworks.
Experiments on the well-known GLUE benchmark show improved performance in multi-task learning while adding only 0.29% parameters per task.
arXiv Detail & Related papers (2021-06-08T16:16:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.