MixPHM: Redundancy-Aware Parameter-Efficient Tuning for Low-Resource
Visual Question Answering
- URL: http://arxiv.org/abs/2303.01239v2
- Date: Wed, 7 Jun 2023 12:02:51 GMT
- Title: MixPHM: Redundancy-Aware Parameter-Efficient Tuning for Low-Resource
Visual Question Answering
- Authors: Jingjing Jiang, Nanning Zheng
- Abstract summary: Finetuning pretrained Vision-Language Models (VLMs) has been a prevailing paradigm for achieving state-of-the-art performance in Visual Question Answering (VQA).
Current parameter-efficient tuning methods dramatically reduce the number of tunable parameters, but there still exists a significant performance gap with full finetuning.
We propose MixPHM, a redundancy-aware parameter-efficient tuning method that outperforms full finetuning in low-resource VQA.
- Score: 66.05768870785548
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, finetuning pretrained Vision-Language Models (VLMs) has been a
prevailing paradigm for achieving state-of-the-art performance in Visual
Question Answering (VQA). However, as VLMs scale, finetuning full model
parameters for a given task in low-resource settings becomes computationally
expensive, storage inefficient, and prone to overfitting. Current
parameter-efficient tuning methods dramatically reduce the number of tunable
parameters, but there still exists a significant performance gap with full
finetuning. In this paper, we propose MixPHM, a redundancy-aware
parameter-efficient tuning method that outperforms full finetuning in
low-resource VQA. Specifically, MixPHM is a lightweight module implemented by
multiple PHM-experts in a mixture-of-experts manner. To reduce parameter
redundancy, MixPHM reparameterizes expert weights in a low-rank subspace and
shares part of the weights inside and across experts. Moreover, based on a
quantitative redundancy analysis for adapters, we propose Redundancy
Regularization to reduce task-irrelevant redundancy while promoting
task-relevant correlation in MixPHM representations. Experiments conducted on
VQA v2, GQA, and OK-VQA demonstrate that MixPHM outperforms state-of-the-art
parameter-efficient methods and is the only one consistently surpassing full
finetuning.
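To make the description above concrete, the following is a minimal PyTorch sketch of a MixPHM-style module, not the authors' implementation. It assumes PHM weights built as sums of Kronecker products, low-rank expert-specific factors with the small Kronecker factors shared inside and across experts, and simple output averaging in place of expert routing; the proposed Redundancy Regularization is omitted.

```python
# Minimal MixPHM-style sketch (an illustrative approximation, not the authors' code).
# Assumptions: PHM weights are sums of Kronecker products W = sum_i kron(A_i, S_i @ T_i),
# the small A_i factors are shared across experts, the low-rank S_i/T_i factors are
# expert-specific, and expert outputs are averaged instead of routed.
import torch
import torch.nn as nn


def phm_weight(A, S, T):
    """W = sum_i kron(A[i], S[i] @ T[i]); A: (n, n, n), S: (n, p, r), T: (n, r, q)."""
    B = torch.bmm(S, T)                               # (n, p, q) low-rank blocks
    n, p, q = B.shape
    W = torch.einsum("iab,icd->acbd", A, B)           # (n, p, n, q)
    return W.reshape(n * p, n * q)


class PHMExpert(nn.Module):
    """Bottleneck adapter whose down/up projections use PHM-parameterized weights."""

    def __init__(self, shared_A, d_model, d_bottleneck, rank=1):
        super().__init__()
        n = shared_A.shape[0]
        self.shared_A = shared_A                      # shared inside and across experts
        self.S_down = nn.Parameter(torch.randn(n, d_model // n, rank) * 0.02)
        self.T_down = nn.Parameter(torch.randn(n, rank, d_bottleneck // n) * 0.02)
        self.S_up = nn.Parameter(torch.zeros(n, d_bottleneck // n, rank))
        self.T_up = nn.Parameter(torch.zeros(n, rank, d_model // n))
        self.act = nn.GELU()

    def forward(self, x):
        W_down = phm_weight(self.shared_A, self.S_down, self.T_down)
        W_up = phm_weight(self.shared_A, self.S_up, self.T_up)
        return self.act(x @ W_down) @ W_up


class MixPHMSketch(nn.Module):
    """Mixture of PHM-experts added as a residual branch to a frozen layer."""

    def __init__(self, d_model=768, d_bottleneck=96, n=4, num_experts=4):
        super().__init__()
        self.shared_A = nn.Parameter(torch.randn(n, n, n) * 0.1)
        self.experts = nn.ModuleList(
            PHMExpert(self.shared_A, d_model, d_bottleneck) for _ in range(num_experts)
        )

    def forward(self, hidden_states):
        # Average the experts' outputs (a simple stand-in for expert routing).
        delta = torch.stack([e(hidden_states) for e in self.experts]).mean(dim=0)
        return hidden_states + delta
```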
Related papers
- LoRTA: Low Rank Tensor Adaptation of Large Language Models [70.32218116940393]
Low Rank Adaptation (LoRA) is a popular Parameter-Efficient Fine-Tuning (PEFT) method that effectively adapts large pre-trained models for downstream tasks.
We propose a novel approach that employs a low rank tensor parametrization for model updates.
Our method is both efficient and effective for fine-tuning large language models, achieving a substantial reduction in the number of parameters while maintaining comparable performance.
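As a rough illustration of the low-rank tensor idea, the sketch below parameterizes the updates of all layers jointly with a CP-style factorization; the shapes, names, and per-layer mixing coefficients are assumptions, not details taken from the paper.

```python
# CP-style joint parameterization of per-layer weight updates (an assumed form,
# not the paper's exact construction): DeltaW[l] = sum_r c[l, r] * u[:, r] v[:, r]^T,
# so the factors u, v are shared across all layers.
import torch
import torch.nn as nn


class CPLayerUpdates(nn.Module):
    def __init__(self, num_layers, d_out, d_in, rank=8, scale=1.0):
        super().__init__()
        self.u = nn.Parameter(torch.randn(d_out, rank) * 0.02)  # shared across layers
        self.v = nn.Parameter(torch.zeros(d_in, rank))          # shared across layers
        self.c = nn.Parameter(torch.ones(num_layers, rank))     # per-layer mixing weights
        self.scale = scale

    def delta(self, layer_idx):
        # Low-rank update for one layer, mixed from the shared factors.
        return self.scale * torch.einsum("r,or,ir->oi", self.c[layer_idx], self.u, self.v)

    def adapted_weights(self, frozen_weights):
        # frozen_weights: list of (d_out, d_in) tensors, one per layer (kept frozen).
        return [w + self.delta(l) for l, w in enumerate(frozen_weights)]
```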
arXiv Detail & Related papers (2024-10-05T06:59:50Z)
- MoDE: Effective Multi-task Parameter Efficient Fine-Tuning with a Mixture of Dyadic Experts [6.245113492272563]
Mixture of Dyadic Experts (MoDE) is a novel design for efficient multi-task adaptation.
Our design allows for more fine-grained mixing, thereby increasing the model's ability to jointly handle multiple tasks.
arXiv Detail & Related papers (2024-08-02T18:05:10Z)
- ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections [59.839926875976225]
We propose the ETHER transformation family, which performs Efficient fineTuning via HypErplane Reflections.
In particular, we introduce ETHER and its relaxation ETHER+, which match or outperform existing PEFT methods with significantly fewer parameters.
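The core operation, a hyperplane (Householder) reflection applied to a frozen weight, can be sketched as follows; the block structure and the ETHER+ relaxation of the actual method are omitted, and the module names are assumptions.

```python
# Sketch of finetuning with a learned hyperplane (Householder) reflection applied
# to a frozen weight; block structure and the ETHER+ relaxation are omitted.
import torch
import torch.nn as nn


class ReflectionTunedLinear(nn.Module):
    def __init__(self, frozen_linear: nn.Linear):
        super().__init__()
        self.frozen = frozen_linear
        for p in self.frozen.parameters():
            p.requires_grad_(False)
        d_out = frozen_linear.out_features
        # A single trainable vector defines the reflection hyperplane.
        self.u = nn.Parameter(torch.randn(d_out) / d_out ** 0.5)

    def forward(self, x):
        u = self.u / self.u.norm()                              # unit hyperplane normal
        H = torch.eye(u.numel(), device=u.device) - 2.0 * torch.outer(u, u)
        return nn.functional.linear(x, H @ self.frozen.weight, self.frozen.bias)
```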
arXiv Detail & Related papers (2024-05-30T17:26:02Z)
- LoRA Meets Dropout under a Unified Framework [38.5176197615878]
Large language models (LLMs) have emerged as essential elements in numerous NLP applications.
Various dropout methods, initially designed for full finetuning with all the parameters updated, alleviate overfitting associated with excessive parameter redundancy.
We introduce a unified framework for a comprehensive investigation, which instantiates these methods based on dropping position, structural pattern and compensation measure.
arXiv Detail & Related papers (2024-02-25T07:09:10Z)
- QFT: Quantized Full-parameter Tuning of LLMs with Affordable Resources [37.265708531464746]
Large Language Models (LLMs) have showcased remarkable impacts across a wide spectrum of natural language processing tasks.
Fine-tuning these pre-trained models on downstream datasets provides further significant performance gains, but this process has been challenging due to its extraordinary resource requirements.
We propose QFT, a novel Quantized Full-parameter Tuning framework for LLMs that enables memory-efficient fine-tuning without harming performance.
arXiv Detail & Related papers (2023-10-11T02:47:40Z)
- Parameter-Efficient Fine-Tuning without Introducing New Latency [7.631596468553607]
We introduce a novel adapter technique that directly applies the adapter to pre-trained parameters instead of the hidden representation.
Our proposed method attains a new state-of-the-art outcome in terms of both performance and storage efficiency, storing only 0.03% of the parameters of full fine-tuning.
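A hedged sketch of the idea of adapting pre-trained parameters rather than hidden representations: the tunable update lives in weight space, so it can be merged into the frozen weight after training and inference incurs no extra latency. The low-rank parameterization below is an assumption for illustration, not the paper's exact formulation.

```python
# Sketch of an adapter that acts on pre-trained *parameters* rather than hidden
# states; the low-rank form of the update is an assumption for illustration.
import torch
import torch.nn as nn


class WeightSpaceAdapter(nn.Module):
    def __init__(self, frozen_linear: nn.Linear, rank=4):
        super().__init__()
        self.frozen = frozen_linear
        for p in self.frozen.parameters():
            p.requires_grad_(False)
        d_out, d_in = frozen_linear.weight.shape
        self.down = nn.Parameter(torch.randn(rank, d_in) * 0.02)
        self.up = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x):
        # The adapter modifies the weight itself, not the hidden representation.
        W = self.frozen.weight + self.up @ self.down
        return nn.functional.linear(x, W, self.frozen.bias)

    @torch.no_grad()
    def merge(self):
        # Fold the learned update into the frozen weight: no extra inference latency.
        self.frozen.weight.add_(self.up @ self.down)
```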
arXiv Detail & Related papers (2023-05-26T08:44:42Z)
- Parameter-efficient Tuning of Large-scale Multimodal Foundation Model [68.24510810095802]
We propose a graceful prompt framework for cross-modal transfer (Aurora) to overcome the challenges of adapting large-scale multimodal foundation models.
Considering the redundancy in existing architectures, we first use mode approximation to generate 0.1M trainable parameters for multimodal prompt tuning.
A thorough evaluation on six cross-modal benchmarks shows that it not only outperforms the state-of-the-art but even outperforms the full fine-tuning approach.
arXiv Detail & Related papers (2023-05-15T06:40:56Z)
- AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning [112.97430455461097]
We propose a general PEFT method that tunes a mixture of adaptation modules introduced in each Transformer layer while keeping most of the PLM weights frozen.
By only tuning 0.1-0.2% of PLM parameters, we show that AdaMix outperforms SOTA parameter-efficient fine-tuning and full model fine-tuning for both NLU and NLG tasks.
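A rough sketch of a mixture of adaptation modules with the PLM weights frozen: several bottleneck adapters share a layer, one is sampled per training step, and their outputs are averaged at inference as a stand-in for the paper's weight merging; sizes and routing details are assumptions.

```python
# Sketch of a mixture of adaptation modules: one adapter is sampled per training
# step, and outputs are averaged at inference (a stand-in for weight merging).
import random
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    def __init__(self, d_model, d_bottleneck):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))


class MixtureOfAdaptations(nn.Module):
    def __init__(self, d_model=768, d_bottleneck=48, num_adapters=4):
        super().__init__()
        self.adapters = nn.ModuleList(
            BottleneckAdapter(d_model, d_bottleneck) for _ in range(num_adapters)
        )

    def forward(self, hidden_states):
        if self.training:
            # Stochastic routing: one adaptation module per training step.
            return random.choice(self.adapters)(hidden_states)
        return torch.stack([m(hidden_states) for m in self.adapters]).mean(dim=0)
```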
arXiv Detail & Related papers (2022-10-31T16:23:36Z)
- Amortized Auto-Tuning: Cost-Efficient Transfer Optimization for Hyperparameter Recommendation [83.85021205445662]
We conduct a thorough analysis of the multi-task multi-fidelity Bayesian optimization framework, which leads to its best instantiation, amortized auto-tuning (AT2).
We propose AT2 to speed up the tuning of machine learning models.
arXiv Detail & Related papers (2021-06-17T00:01:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.