Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision
Tasks
- URL: http://arxiv.org/abs/2210.03265v1
- Date: Fri, 7 Oct 2022 00:25:02 GMT
- Title: Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision
Tasks
- Authors: Yen-Cheng Liu, Chih-Yao Ma, Junjiao Tian, Zijian He, Zsolt Kira
- Abstract summary: We propose Polyhistor and Polyhistor-Lite to share information across different tasks with a few trainable parameters.
Specifically, Polyhistor achieves competitive accuracy compared to the state-of-the-art while only using ~10% of their trainable parameters.
- Score: 36.34331439747556
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adapting large-scale pretrained models to various downstream tasks via
fine-tuning is a standard method in machine learning. Recently,
parameter-efficient fine-tuning methods show promise in adapting a pretrained
model to different tasks while training only a few parameters. Despite their
success, most existing methods are proposed in Natural Language Processing
tasks with language Transformers, and adaptation to Computer Vision tasks with
Vision Transformers remains under-explored, especially for dense vision tasks.
Further, in multi-task settings, individually fine-tuning and storing separate
models for different tasks is inefficient. In this work, we provide an
extensive multi-task parameter-efficient benchmark and examine existing
parameter-efficient fine-tuning NLP methods for vision tasks. Our results on
four different dense vision tasks showed that existing methods cannot be
efficiently integrated due to the hierarchical nature of the Hierarchical
Vision Transformers. To overcome this issue, we propose Polyhistor and
Polyhistor-Lite, consisting of Decomposed HyperNetworks and Layer-wise Scaling
Kernels, to share information across different tasks with a few trainable
parameters. This leads to favorable performance improvements against existing
parameter-efficient methods while using fewer trainable parameters.
Specifically, Polyhistor achieves competitive accuracy compared to the
state-of-the-art while only using ~10% of their trainable parameters.
Furthermore, our methods show larger performance gains when large networks and
more pretraining data are used.
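For intuition, a minimal sketch of the recipe described above: a small hypernetwork, conditioned on a task embedding, generates low-rank adapter factors, and tiny per-stage scaling layers let one shared adapter template serve the different channel widths of a hierarchical Vision Transformer. This is an illustrative approximation, not the paper's exact Decomposed HyperNetwork and Layer-wise Scaling Kernel design; all names, shapes, and the PyTorch framing are assumptions.

```python
import torch
import torch.nn as nn


class DecomposedHyperAdapter(nn.Module):
    """Hypothetical sketch: task-conditioned low-rank adapters for a multi-stage ViT."""

    def __init__(self, num_tasks, task_dim=64, rank=8, template_dim=96,
                 stage_dims=(96, 192, 384, 768)):
        super().__init__()
        self.task_embed = nn.Embedding(num_tasks, task_dim)
        # Small hypernetwork heads that emit the two low-rank factors of an adapter.
        self.gen_down = nn.Linear(task_dim, template_dim * rank)
        self.gen_up = nn.Linear(task_dim, rank * template_dim)
        # Per-stage scaling layers: map each stage's channel width onto the shared
        # template width and back, so one generated adapter serves every stage.
        self.scale_in = nn.ModuleList([nn.Linear(d, template_dim, bias=False) for d in stage_dims])
        self.scale_out = nn.ModuleList([nn.Linear(template_dim, d, bias=False) for d in stage_dims])
        self.rank, self.template_dim = rank, template_dim

    def forward(self, x, task_id, stage):
        """x: (batch, tokens, stage_dims[stage]); returns x plus the adapter residual."""
        z = self.task_embed(torch.as_tensor(task_id))
        down = self.gen_down(z).view(self.template_dim, self.rank)
        up = self.gen_up(z).view(self.rank, self.template_dim)
        h = self.scale_in[stage](x)            # stage width -> shared template width
        h = torch.relu(h @ down @ up)          # low-rank adapter produced by the hypernetwork
        return x + self.scale_out[stage](h)    # back to the stage width, as a residual
```

In this sketch only the hypernetwork heads, the task embeddings, and the tiny per-stage scaling layers are trained; the pretrained hierarchical backbone stays frozen, which is where the parameter savings come from.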
Related papers
- VMT-Adapter: Parameter-Efficient Transfer Learning for Multi-Task Dense Scene Understanding [6.816428690763012]
A standard approach to leverage large-scale pre-trained models is to fine-tune all model parameters for downstream tasks.
We propose VMT-Adapter, which shares knowledge from multiple tasks to enhance cross-task interaction.
We also propose VMT-Adapter-Lite, which further reduces the trainable parameters by learning shared parameters between down- and up-projections.
arXiv Detail & Related papers (2023-12-14T08:25:04Z)
- Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning [30.251155072822055]
Prototype-based HyperAdapter (PHA) is a novel framework built on the adapter-tuning and hypernetwork.
It introduces an instance-dense retriever and a prototypical hypernetwork to generate conditional modules in a sample-efficient manner.
We show that PHA strikes a better trade-off between trainable parameters, accuracy on a stream of tasks, and sample efficiency.
arXiv Detail & Related papers (2023-10-18T02:42:17Z)
- Parameter Efficient Multi-task Model Fusion with Partial Linearization [97.23530944186078]
We propose a novel method to improve multi-task fusion for parameter-efficient fine-tuning techniques.
Our approach partially linearizes only the adapter modules and applies task arithmetic over the linearized adapters.
We demonstrate that our partial linearization technique enables a more effective fusion of multiple tasks into a single model.
arXiv Detail & Related papers (2023-10-07T08:55:54Z)
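A rough sketch of the fusion step described in the entry above, under simplifying assumptions: it applies plain task arithmetic to adapter parameters and omits the partial linearization (the first-order expansion around initialization) that the paper adds; the function and variable names are hypothetical.

```python
def fuse_adapters(init_state, task_states, weights):
    """Merge several task-specific adapter checkpoints into one set of parameters.

    init_state:  dict of adapter tensors at their shared initialization
    task_states: list of dicts with the same keys, one per fine-tuned task
    weights:     one scaling coefficient per task (the "task arithmetic" knobs)
    """
    fused = {}
    for name, theta0 in init_state.items():
        # Task vector = fine-tuned adapter parameters minus the shared initialization.
        task_vectors = [state[name] - theta0 for state in task_states]
        fused[name] = theta0 + sum(w * v for w, v in zip(weights, task_vectors))
    return fused
```

Because only the adapters are merged, the frozen backbone is untouched and a single small set of fused adapter weights covers the multi-task model.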
- Prompt Guided Transformer for Multi-Task Dense Prediction [14.815576352301322]
We introduce a lightweight task-conditional model called Prompt Guided Transformer to optimize performance and model parameters.
Our approach achieves state-of-the-art results among task-conditional methods while using fewer parameters, maintaining a good balance between performance and parameter count.
arXiv Detail & Related papers (2023-07-28T07:25:57Z)
- Pro-tuning: Unified Prompt Tuning for Vision Tasks [133.12978197265596]
Fine-tuning is the de-facto approach to leverage pre-trained vision models to perform downstream tasks.
In this work, we propose parameter-efficient Prompt tuning (Pro-tuning) to adapt frozen vision models to various downstream vision tasks.
arXiv Detail & Related papers (2022-07-28T21:09:31Z)
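For reference, a generic visual prompt-tuning sketch in the spirit of the entry above; it is not the Pro-tuning architecture itself, and the backbone interface, names, and sizes are assumptions.

```python
import torch
import torch.nn as nn


class PromptTunedViT(nn.Module):
    """Hypothetical sketch: learnable prompt tokens in front of a frozen ViT."""

    def __init__(self, frozen_blocks, embed_dim=768, num_prompts=10, num_classes=19):
        super().__init__()
        self.blocks = frozen_blocks
        for p in self.blocks.parameters():               # the pretrained model is not updated
            p.requires_grad_(False)
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)    # small task-specific head

    def forward(self, patch_tokens):
        """patch_tokens: (batch, tokens, embed_dim) from the frozen patch embedding."""
        b = patch_tokens.size(0)
        tokens = torch.cat([self.prompts.expand(b, -1, -1), patch_tokens], dim=1)
        feats = self.blocks(tokens)                      # frozen transformer blocks (assumed callable)
        return self.head(feats.mean(dim=1))              # pool and predict with the tuned head
```

Only the prompt tokens and the small head are trained, which is the parameter-efficiency argument prompt-based adaptation makes.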
- DiSparse: Disentangled Sparsification for Multitask Model Compression [92.84435347164435]
DiSparse is a simple, effective, and first-of-its-kind multitask pruning and sparse training scheme.
Our experimental results demonstrate superior performance on various configurations and settings.
arXiv Detail & Related papers (2022-06-09T17:57:46Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
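A loose sketch of the layer-selection idea in the TAPS entry above, not the authors' formulation: each layer is paired with a task-specific copy behind a learnable gate, and a sparsity penalty keeps most gates off so only a few layers become task-specific. The gating form and all names are assumptions.

```python
import copy
import torch
import torch.nn as nn


class TaskAdaptiveStack(nn.Module):
    """Hypothetical sketch: adaptively choose which layers get a task-specific copy."""

    def __init__(self, pretrained_layers):
        super().__init__()
        # Trainable task-specific copies, made before freezing the shared originals.
        self.task_specific = nn.ModuleList([copy.deepcopy(l) for l in pretrained_layers])
        for p in self.task_specific.parameters():
            p.requires_grad_(True)                       # the copies are the trainable part
        self.shared = nn.ModuleList(pretrained_layers)
        for p in self.shared.parameters():
            p.requires_grad_(False)                      # shared backbone stays frozen
        self.scores = nn.Parameter(torch.full((len(pretrained_layers),), -3.0))

    def forward(self, x):
        for shared, specific, s in zip(self.shared, self.task_specific, self.scores):
            g = torch.sigmoid(s)                         # soft "use the task-specific layer?" gate
            x = (1.0 - g) * shared(x) + g * specific(x)
        return x

    def sparsity_loss(self, coef=1e-3):
        # Pushes most gates toward zero so few layers end up task-specific.
        return coef * torch.sigmoid(self.scores).sum()
```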
- Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks [37.2958914602899]
We show that we can learn adapter parameters for all layers and tasks by generating them using shared hypernetworks.
Experiments on the well-known GLUE benchmark show improved performance in multi-task learning while adding only 0.29% parameters per task.
arXiv Detail & Related papers (2021-06-08T16:16:40Z)
- Parameter-Efficient Transfer Learning with Diff Pruning [108.03864629388404]
diff pruning is a simple approach to enable parameter-efficient transfer learning within the pretrain-finetune framework.
We find that models finetuned with diff pruning can match the performance of fully finetuned baselines on the GLUE benchmark.
arXiv Detail & Related papers (2020-12-14T12:34:01Z)
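A simplified sketch of the diff-pruning idea from the entry above, with an L1 penalty standing in for the paper's L0-style sparsity machinery; names are hypothetical and this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiffPrunedLinear(nn.Module):
    """Hypothetical sketch: frozen pretrained weights plus a sparse trainable difference."""

    def __init__(self, pretrained_linear):
        super().__init__()
        self.base = pretrained_linear
        for p in self.base.parameters():
            p.requires_grad_(False)                      # the pretrained weights never change
        self.diff = nn.Parameter(torch.zeros_like(self.base.weight))

    def forward(self, x):
        # Task-specific weights = pretrained weights + learned (ideally sparse) difference.
        return F.linear(x, self.base.weight + self.diff, self.base.bias)


def diff_sparsity_loss(model, coef=1e-4):
    # L1 surrogate for the L0 objective that diff pruning uses to keep the diffs sparse.
    diffs = [m.diff.abs().sum() for m in model.modules() if isinstance(m, DiffPrunedLinear)]
    return coef * torch.stack(diffs).sum()
```

Only the sparse diff needs to be stored per task, which is how diff pruning keeps per-task storage small while matching fully finetuned baselines.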
- Parameter-Efficient Transfer from Sequential Behaviors for User Modeling and Recommendation [111.44445634272235]
In this paper, we develop a parameter-efficient transfer learning architecture, termed PeterRec.
PeterRec allows the pre-trained parameters to remain unaltered during fine-tuning by injecting a series of re-learned neural networks.
We perform extensive experimental ablation to show the effectiveness of the learned user representation in five downstream tasks.
arXiv Detail & Related papers (2020-01-13T14:09:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.