Beyond Task Vectors: Selective Task Arithmetic Based on Importance Metrics
- URL: http://arxiv.org/abs/2411.16139v1
- Date: Mon, 25 Nov 2024 06:59:16 GMT
- Title: Beyond Task Vectors: Selective Task Arithmetic Based on Importance Metrics
- Authors: Tian Bowen, Lai Songning, Wu Jiemin, Shuai Zhihao, Ge Shiming, Yue Yutao
- Abstract summary: This paper introduces Selective Task Arithmetic (STA), a training-free framework designed to enhance multi-task performance through task-specific parameter fusion.
Experimental results demonstrate that STA achieves superior multi-task performance across benchmarks and excellent performance in task forgetting.
- Abstract: Pretrained models have revolutionized deep learning by enabling significant performance improvements across a wide range of tasks, leveraging large-scale, pre-learned knowledge representations. However, deploying these models in real-world multi-task learning (MTL) scenarios poses substantial challenges, primarily due to high computational costs and inefficiencies in inference. Traditional approaches such as pruning, quantization, and knowledge distillation have been explored to mitigate these issues, but they often fall short in fully addressing the complexities of multi-task environments. This paper introduces \textbf{\underline{S}}elective \textbf{\underline{T}}ask \textbf{\underline{A}}rithmetic \underline{\textbf{(STA)}}, a training-free framework designed to enhance multi-task performance through task-specific parameter fusion. STA addresses three key challenges: (i) \textbf{Parameter importance diversity:} Recognizing that different tasks rely on distinct parameters, STA employs a loss-sensitive parameter importance metric derived from a first-order Taylor expansion to accurately measure the importance of parameters for each task. (ii) \textbf{Over-reliance on hyperparameter tuning:} By enhancing the sparsity of task vectors through parameter importance metrics, STA reduces the need for extensive hyperparameter tuning, thereby improving the generalization and robustness of the model. (iii) \textbf{Neglect of other abilities in task arithmetic:} Previous works have largely overlooked the potential for more precise task forgetting. STA leverages its parameter importance metric to achieve more controlled and effective task forgetting, minimizing the impact of noisy elements that can degrade model performance. Experimental results demonstrate that STA achieves superior multi-task performance across benchmarks and excellent performance in task forgetting.
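The abstract describes three ingredients: a first-order-Taylor importance score per parameter, importance-based sparsification of task vectors, and signed application of those vectors for either multi-task fusion or task forgetting. The code below is a minimal, illustrative reading of that recipe in PyTorch, not the authors' released implementation; the use of accumulated |θ · ∂L/∂θ| as the importance score, the `keep_ratio` threshold, and all function names are assumptions.

```python
# Minimal, illustrative sketch of selective task arithmetic (not the authors' released code).
# Assumptions: PyTorch models sharing one architecture, and |theta * dL/dtheta| accumulated
# over a task's data as the first-order-Taylor importance score; `keep_ratio` and all
# function names below are hypothetical.
import torch


def importance_scores(model, data_loader, loss_fn, device="cpu"):
    """Accumulate |theta * grad| per parameter as a loss-sensitivity proxy for one task."""
    model.to(device).train()
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x.to(device)), y.to(device)).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += (p.detach() * p.grad.detach()).abs()
    return scores


def selective_task_vector(pretrained, finetuned, scores, keep_ratio=0.1):
    """Keep only the most task-important entries of the task vector (finetuned - pretrained)."""
    ft = finetuned.state_dict()
    sparse_tv = {}
    for n, p_pre in pretrained.named_parameters():
        tv = ft[n] - p_pre.detach()
        flat = scores[n].flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        threshold = flat.topk(k).values.min()
        sparse_tv[n] = tv * (scores[n] >= threshold)  # zero out unimportant entries
    return sparse_tv


def apply_task_vectors(pretrained, task_vectors, alpha=1.0):
    """alpha > 0 merges task abilities into the pretrained weights; alpha < 0 forgets a task."""
    merged = {n: p.detach().clone() for n, p in pretrained.named_parameters()}
    for tv in task_vectors:
        for n in merged:
            merged[n] += alpha * tv[n]
    return merged
```

Under this reading, multi-task fusion applies every sparse task vector with a positive alpha, while forgetting subtracts a single task's sparse vector (negative alpha) from the merged weights; the importance-based sparsity is what is meant to limit collateral damage to the remaining tasks.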
Related papers
- Parameter-Efficient Fine-Tuning for Continual Learning: A Neural Tangent Kernel Perspective [125.00228936051657]
We introduce NTK-CL, a novel framework that eliminates task-specific parameter storage while adaptively generating task-relevant features.
By fine-tuning optimizable parameters with appropriate regularization, NTK-CL achieves state-of-the-art performance on established PEFT-CL benchmarks.
arXiv Detail & Related papers (2024-07-24T09:30:04Z)
- PECTP: Parameter-Efficient Cross-Task Prompts for Incremental Vision Transformer [76.39111896665585]
Incremental Learning (IL) aims to learn deep models on sequential tasks continually.
Recent vast pre-trained models (PTMs) have achieved outstanding performance via prompting techniques in practical IL without access to old samples.
arXiv Detail & Related papers (2024-07-04T10:37:58Z)
- InterroGate: Learning to Share, Specialize, and Prune Representations for Multi-task Learning [17.66308231838553]
We propose a novel multi-task learning (MTL) architecture designed to mitigate task interference while optimizing inference computational efficiency.
We employ a learnable gating mechanism to automatically balance the shared and task-specific representations while preserving the performance of all tasks.
arXiv Detail & Related papers (2024-02-26T18:59:52Z)
- VMT-Adapter: Parameter-Efficient Transfer Learning for Multi-Task Dense Scene Understanding [6.816428690763012]
A standard approach to leverage large-scale pre-trained models is to fine-tune all model parameters for downstream tasks.
We propose VMT-Adapter, which shares knowledge from multiple tasks to enhance cross-task interaction.
We also propose VMT-Adapter-Lite, which further reduces the trainable parameters by learning shared parameters between down- and up-projections.
arXiv Detail & Related papers (2023-12-14T08:25:04Z)
- Task Difficulty Aware Parameter Allocation & Regularization for Lifelong Learning [20.177260510548535]
We propose Parameter Allocation & Regularization (PAR), which adaptively selects an appropriate strategy for each task, choosing between parameter allocation and regularization based on its learning difficulty.
Our method is scalable and significantly reduces the model's redundancy while improving the model's performance.
arXiv Detail & Related papers (2023-04-11T15:38:21Z)
- AdaTask: A Task-aware Adaptive Learning Rate Approach to Multi-task Learning [19.201899503691266]
We measure the task dominance degree of a parameter by the total updates of each task on this parameter.
We propose a Task-wise Adaptive learning rate approach, AdaTask, to separate the accumulative gradients, and hence the learning rate, of each task (a minimal sketch of this idea appears after this list).
Experiments on computer vision and recommender system MTL datasets demonstrate that AdaTask significantly improves the performance of dominated tasks.
arXiv Detail & Related papers (2022-11-28T04:24:38Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
- SUPERB-SG: Enhanced Speech processing Universal PERformance Benchmark for Semantic and Generative Capabilities [76.97949110580703]
We introduce SUPERB-SG, a new benchmark to evaluate pre-trained models across various speech tasks.
We use a lightweight methodology to test the robustness of representations learned by pre-trained models under shifts in data domain.
We also show that the task diversity of SUPERB-SG coupled with limited task supervision is an effective recipe for evaluating the generalizability of model representation.
arXiv Detail & Related papers (2022-03-14T04:26:40Z)
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
- Reparameterizing Convolutions for Incremental Multi-Task Learning without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting the previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which have been shown to significantly degrade single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z)
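The AdaTask entry above describes keeping separate accumulative gradients per task so that each task effectively receives its own adaptive learning rate on shared parameters. Below is a minimal sketch of that idea layered on an Adam-style second-moment update; the class name, the per-task gradient interface, and the hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of a task-wise adaptive update (inspired by the AdaTask summary above);
# the class name, interface, and hyperparameters are assumptions, not the paper's code.
import torch


class TaskwiseAdaptiveUpdate:
    """Keeps a separate accumulative squared-gradient per task, so each task's
    contribution to a shared parameter is scaled by its own adaptive step size."""

    def __init__(self, params, num_tasks, lr=1e-3, beta=0.999, eps=1e-8):
        self.params = list(params)
        self.lr, self.beta, self.eps = lr, beta, eps
        # One second-moment accumulator per (task, parameter) pair.
        self.v = [[torch.zeros_like(p) for p in self.params] for _ in range(num_tasks)]

    @torch.no_grad()
    def step(self, per_task_grads):
        """per_task_grads[t][i] is task t's gradient for parameter i."""
        for i, p in enumerate(self.params):
            update = torch.zeros_like(p)
            for t, grads in enumerate(per_task_grads):
                g = grads[i]
                self.v[t][i].mul_(self.beta).addcmul_(g, g, value=1 - self.beta)
                # Normalizing each task by its own accumulator keeps a large-gradient
                # task from dominating the update of the shared parameter.
                update += g / (self.v[t][i].sqrt() + self.eps)
            p -= self.lr * update
```

The design choice being illustrated is the separation itself: with a single shared accumulator, the dominant task's gradient history would set the effective step size for every task.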