Parameter-Efficient Sparse Retrievers and Rerankers using Adapters
- URL: http://arxiv.org/abs/2303.13220v1
- Date: Thu, 23 Mar 2023 12:34:30 GMT
- Title: Parameter-Efficient Sparse Retrievers and Rerankers using Adapters
- Authors: Vaishali Pal, Carlos Lassance, Hervé Déjean, Stéphane Clinchant
- Abstract summary: We study adapters for SPLADE, a sparse retriever, for which adapters retain the efficiency and effectiveness otherwise achieved by finetuning.
We also address domain adaptation of neural retrieval thanks to adapters on cross-domain BEIR datasets and TripClick.
- Score: 4.9545244468634655
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Parameter-efficient transfer learning with adapters has been studied
in Natural Language Processing (NLP) as an alternative to full fine-tuning.
Adapters are memory-efficient and scale well with downstream tasks by training
small bottleneck layers added between transformer layers while keeping the
large pretrained language models (PLMs) frozen. Despite showing promising
results in NLP, these methods are under-explored in Information Retrieval.
While previous studies have only experimented with dense retrievers or in a
cross-lingual retrieval scenario, in this paper we aim to complete the picture
on the use of adapters in IR. First, we study adapters for SPLADE, a sparse
retriever, for which adapters not only retain the efficiency and effectiveness
otherwise achieved by fine-tuning, but are memory-efficient and orders of
magnitude lighter to train. We observe that Adapters-SPLADE not only optimizes
just 2% of the training parameters, but also outperforms its fully fine-tuned
counterpart and existing parameter-efficient dense IR models on IR benchmark
datasets. Secondly, we address domain adaptation of neural retrieval thanks to
adapters on cross-domain BEIR datasets and TripClick. Finally, we also consider
knowledge sharing between rerankers and first-stage rankers. Overall, our study
completes the examination of adapters for neural IR.
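A minimal sketch of the bottleneck-adapter setup described in the abstract (not the authors' implementation): a small down-project / non-linearity / up-project block with a residual connection is attached to each transformer layer, and only these blocks are trained while the pretrained language model stays frozen. The hidden and bottleneck sizes, and the assumption that adapter parameters carry "adapter" in their names, are illustrative.

```python
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Down-project, apply a non-linearity, up-project, and add a residual connection."""

    def __init__(self, hidden_size: int = 768, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)  # d -> r
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_size, hidden_size)    # r -> d

    def forward(self, hidden_states):
        # The residual keeps the frozen model's representation as the starting point.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


def freeze_backbone(model: nn.Module) -> float:
    """Freeze every parameter except those belonging to adapter modules;
    return the fraction of parameters that remain trainable."""
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name  # assumes adapters are registered under this name
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable / total
```

With a BERT-base-sized backbone and a bottleneck of 64, the trainable fraction is on the order of one to a few percent depending on how many adapters are inserted per layer, which is the regime the ~2% figure in the abstract refers to.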
Related papers
- Parameter-Efficient Fine-Tuning With Adapters [5.948206235442328]
This research introduces a novel adaptation method utilizing the UniPELT framework as a base.
Our method employs adapters, which enable efficient transfer of pretrained models to new tasks with minimal retraining of the base model parameters.
arXiv Detail & Related papers (2024-05-09T01:40:38Z)
- Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis [51.14136878142034]
Point cloud analysis has achieved outstanding performance by transferring point cloud pre-trained models.
Existing methods for model adaptation usually update all model parameters, which is inefficient because it incurs high computational costs.
In this paper, we aim to study parameter-efficient transfer learning for point cloud analysis with an ideal trade-off between task performance and parameter efficiency.
arXiv Detail & Related papers (2024-03-03T08:25:04Z)
- MerA: Merging Pretrained Adapters For Few-Shot Learning [71.44422347502409]
We propose Merging Pretrained Adapters (MerA), which efficiently incorporates pretrained adapters into a single model through model fusion.
Experiments on two PLMs demonstrate that MerA achieves substantial improvements compared to both single adapters and AdapterFusion.
arXiv Detail & Related papers (2023-08-30T12:10:17Z)
- Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy [17.203320079872952]
Current state-of-the-art results in computer vision depend in part on fine-tuning large pre-trained vision models.
With the exponential growth of model sizes, conventional full fine-tuning leads to increasingly large storage and transmission overhead.
In this paper, we investigate how to make adapters even more efficient, reaching a new minimum size required to store a task-specific fine-tuned network.
arXiv Detail & Related papers (2023-07-31T17:22:17Z)
- Towards Efficient Visual Adaption via Structural Re-parameterization [76.57083043547296]
We propose a parameter-efficient and computationally friendly adapter for giant vision models, called RepAdapter.
RepAdapter outperforms full tuning by +7.2% on average and saves up to 25% of training time, 20% of GPU memory, and 94.6% of the storage cost of ViT-B/16 on VTAB-1k (a generic re-parameterization sketch appears after this list).
arXiv Detail & Related papers (2023-02-16T06:14:15Z)
- CHAPTER: Exploiting Convolutional Neural Network Adapters for Self-supervised Speech Models [62.60723685118747]
Self-supervised learning (SSL) is a powerful technique for learning representations from unlabeled data.
We propose an efficient tuning method specifically designed for SSL speech models, applying CNN adapters at the feature extractor.
We empirically found that adding CNN adapters to the feature extractor can help adaptation on emotion and speaker tasks.
arXiv Detail & Related papers (2022-12-01T08:50:12Z)
- Adaptable Adapters [74.65986170056945]
State-of-the-art pretrained NLP models contain hundreds of millions to trillions of parameters.
Adaptable adapters contain different activation functions for different layers and different input data.
We show that adaptable adapters achieve on-par performances with the standard adapter architecture.
arXiv Detail & Related papers (2022-05-03T14:59:27Z)
- AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks [55.705355299065474]
Transformer-based pre-trained models with millions of parameters require large storage.
Recent approaches tackle this shortcoming by training adapters, but these approaches still require a relatively large number of parameters.
In this study, AdapterBias, a surprisingly simple yet effective adapter architecture, is proposed (an illustrative sketch appears after this list).
arXiv Detail & Related papers (2022-04-30T16:49:41Z)
- On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation [36.37565646597464]
Adapter-based tuning works by adding light-weight adapter modules to a pretrained language model (PrLM).
It adds only a few trainable parameters per new task, allowing a high degree of parameter sharing.
We demonstrate that adapter-based tuning outperforms fine-tuning on low-resource and cross-lingual tasks.
arXiv Detail & Related papers (2021-06-06T16:10:12Z)
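For the "Towards Efficient Visual Adaption via Structural Re-parameterization" entry above, here is a generic sketch of the re-parameterization idea (not RepAdapter's exact formulation): when an adapter is purely linear and sits directly after a linear projection, its weights can be folded into that projection after training, so inference adds no extra cost. All shapes and values below are made up for the demonstration.

```python
import torch

d, r = 16, 4
W, b = torch.randn(d, d), torch.randn(d)            # frozen projection: h = x @ W.T + b
W_down, b_down = torch.randn(r, d), torch.randn(r)  # trained adapter, down-projection
W_up, b_up = torch.randn(d, r), torch.randn(d)      # trained adapter, up-projection
x = torch.randn(2, d)

# Two-step computation: projection followed by a residual, purely linear adapter.
h = x @ W.T + b
y_two_step = h + (h @ W_down.T + b_down) @ W_up.T + b_up

# Re-parameterized computation: a single linear layer with merged weight and bias.
W_merged = W + W_up @ W_down @ W
b_merged = b + W_up @ (W_down @ b + b_down) + b_up
y_merged = x @ W_merged.T + b_merged

assert torch.allclose(y_two_step, y_merged, atol=1e-4)  # identical outputs, one matmul at inference
```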
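For the "AdapterBias" entry above, the summary does not spell out the architecture; the sketch below assumes the "token-dependent representation shift" is a shared learned vector scaled per token by a tiny linear layer, which is one reading of the title. Treat it as an illustration rather than the paper's exact module.

```python
import torch
import torch.nn as nn


class TokenDependentShift(nn.Module):
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.v = nn.Parameter(torch.zeros(hidden_size))  # shared shift direction
        self.alpha = nn.Linear(hidden_size, 1)           # per-token scaling weight

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, hidden_size); each token gets its own scale for v.
        return hidden_states + self.alpha(hidden_states) * self.v


shift = TokenDependentShift()
out = shift(torch.randn(2, 10, 768))  # shape preserved: (2, 10, 768)
```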
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.