Beyond Task Vectors: Selective Task Arithmetic Based on Importance Metrics
- URL: http://arxiv.org/abs/2411.16139v1
- Date: Mon, 25 Nov 2024 06:59:16 GMT
- Title: Beyond Task Vectors: Selective Task Arithmetic Based on Importance Metrics
- Authors: Tian Bowen, Lai Songning, Wu Jiemin, Shuai Zhihao, Ge Shiming, Yue Yutao
- Abstract summary: This paper introduces Selective Task Arithmetic (STA), a training-free framework designed to enhance multi-task performance through task-specific parameter fusion.
Experimental results demonstrate that STA achieves superior multi-task performance across benchmarks and excellent performance in task forgetting.
- Abstract: Pretrained models have revolutionized deep learning by enabling significant performance improvements across a wide range of tasks, leveraging large-scale, pre-learned knowledge representations. However, deploying these models in real-world multi-task learning (MTL) scenarios poses substantial challenges, primarily due to high computational costs and inefficiencies in inference. Traditional approaches such as pruning, quantization, and knowledge distillation have been explored to mitigate these issues, but they often fall short in fully addressing the complexities of multi-task environments. This paper introduces Selective Task Arithmetic (STA), a training-free framework designed to enhance multi-task performance through task-specific parameter fusion. STA addresses three key challenges: (i) Parameter importance diversity: recognizing that different tasks rely on distinct parameters, STA employs a loss-sensitive parameter importance metric derived from a first-order Taylor expansion to accurately measure the importance of parameters for each task. (ii) Over-reliance on hyperparameter tuning: by enhancing the sparsity of task vectors through parameter importance metrics, STA reduces the need for extensive hyperparameter tuning, thereby improving the generalization and robustness of the model. (iii) Neglect of other abilities in task arithmetic: previous works have largely overlooked the potential for more precise task forgetting. STA leverages its parameter importance metric to achieve more controlled and effective task forgetting, minimizing the impact of noisy elements that can degrade model performance. Experimental results demonstrate that STA achieves superior multi-task performance across benchmarks and excellent performance in task forgetting.
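As a rough, hedged illustration of the mechanism the abstract describes (not the authors' released code), the sketch below assumes the loss-sensitive, first-order-Taylor importance score takes the common form |theta * dL/dtheta|, uses it to sparsify a task vector (fine-tuned weights minus pretrained weights), and applies the result with a signed coefficient so that a negative coefficient points toward task forgetting. All helper names, the keep_ratio threshold, and the per-tensor top-k cutoff are illustrative assumptions.

```python
# Hedged sketch of importance-masked task arithmetic; hypothetical helper names,
# assuming the importance score |theta * dL/dtheta| as one possible instantiation
# of a first-order Taylor, loss-sensitive metric.
import torch


def importance_scores(model, loss):
    """Per-parameter importance |theta * dL/dtheta| from a single backward pass."""
    params = [(n, p) for n, p in model.named_parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, [p for _, p in params])
    return {n: (p.detach() * g.detach()).abs() for (n, p), g in zip(params, grads)}


def sparse_task_vector(pretrained, finetuned, scores, keep_ratio=0.1):
    """Task vector (finetuned - pretrained) keeping only the top-`keep_ratio` important entries.

    `pretrained`, `finetuned`, and `scores` are dicts of tensors sharing the same keys.
    """
    sparse = {}
    for name, theta_ft in finetuned.items():
        delta = theta_ft - pretrained[name]
        flat = scores[name].flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        cutoff = torch.topk(flat, k).values.min()        # per-tensor importance threshold
        sparse[name] = delta * (scores[name] >= cutoff).to(delta.dtype)
    return sparse


def apply_task_vector(pretrained, task_vector, alpha=1.0):
    """alpha > 0 merges the task into the backbone; alpha < 0 moves toward forgetting it."""
    return {n: pretrained[n] + alpha * task_vector[n] for n in pretrained}
```

Merging several tasks would then add their sparse task vectors onto the pretrained weights, while calling apply_task_vector with a negative alpha for a single task's vector sketches the task-forgetting direction of point (iii); the paper's exact scoring, thresholding, and scaling choices may differ.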
Related papers
- MaZO: Masked Zeroth-Order Optimization for Multi-Task Fine-Tuning of Large Language Models [26.980104922985326]
We present MaZO, the first framework specifically designed for multi-task LLM fine-tuning under zeroth-order (ZO) optimization.
MaZO tackles these challenges at the parameter level through two key innovations: a weight importance metric to identify critical parameters and a multi-task weight update mask to selectively update these parameters (see the generic masked zeroth-order sketch after this list).
Experiments demonstrate that MaZO achieves state-of-the-art performance, surpassing even multi-task learning methods designed for first-order optimization.
arXiv Detail & Related papers (2025-02-17T07:28:52Z) - Learning Task Representations from In-Context Learning [73.72066284711462]
Large language models (LLMs) have demonstrated remarkable proficiency in in-context learning.
We introduce an automated formulation for encoding task information in ICL prompts as a function of attention heads.
We show that our method's effectiveness stems from aligning the distribution of the last hidden state with that of an optimally performing in-context-learned model.
arXiv Detail & Related papers (2025-02-08T00:16:44Z) - TADFormer: Task-Adaptive Dynamic Transformer for Efficient Multi-Task Learning [14.888918165109244]
Task-Adaptive Dynamic transFormer (TADFormer) is a novel PEFT framework that performs task-aware feature adaptation in a fine-grained manner.
TADFormer achieves higher accuracy in dense scene understanding tasks, while reducing the number of trainable parameters by up to 8.4 times.
arXiv Detail & Related papers (2025-01-08T05:35:07Z) - Parameter-Efficient Fine-Tuning for Continual Learning: A Neural Tangent Kernel Perspective [125.00228936051657]
We introduce NTK-CL, a novel framework that eliminates task-specific parameter storage while adaptively generating task-relevant features.
By fine-tuning optimizable parameters with appropriate regularization, NTK-CL achieves state-of-the-art performance on established PEFT-CL benchmarks.
arXiv Detail & Related papers (2024-07-24T09:30:04Z) - PECTP: Parameter-Efficient Cross-Task Prompts for Incremental Vision Transformer [76.39111896665585]
Incremental Learning (IL) aims to learn deep models on sequential tasks continually.
Recent large pre-trained models (PTMs) have achieved outstanding performance via prompt techniques in practical IL without access to old samples.
arXiv Detail & Related papers (2024-07-04T10:37:58Z) - InterroGate: Learning to Share, Specialize, and Prune Representations for Multi-task Learning [17.66308231838553]
We propose a novel multi-task learning (MTL) architecture designed to mitigate task interference while optimizing inference computational efficiency.
We employ a learnable gating mechanism to automatically balance the shared and task-specific representations while preserving the performance of all tasks.
arXiv Detail & Related papers (2024-02-26T18:59:52Z) - VMT-Adapter: Parameter-Efficient Transfer Learning for Multi-Task Dense Scene Understanding [6.816428690763012]
A standard approach to leverage large-scale pre-trained models is to fine-tune all model parameters for downstream tasks.
We propose VMT-Adapter, which shares knowledge from multiple tasks to enhance cross-task interaction.
We also propose VMT-Adapter-Lite, which further reduces the trainable parameters by learning shared parameters between down- and up-projections.
arXiv Detail & Related papers (2023-12-14T08:25:04Z) - Task Difficulty Aware Parameter Allocation & Regularization for Lifelong Learning [20.177260510548535]
We propose Parameter Allocation & Regularization (PAR), which adaptively selects an appropriate strategy for each task, either parameter allocation or regularization, based on its learning difficulty.
Our method is scalable and significantly reduces the model's redundancy while improving the model's performance.
arXiv Detail & Related papers (2023-04-11T15:38:21Z) - Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
arXiv Detail & Related papers (2022-03-30T23:16:07Z) - Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z) - Reparameterizing Convolutions for Incremental Multi-Task Learning without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting the previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which has been shown to significantly degrade single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z)
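For the masked zeroth-order idea summarized in the MaZO entry above, the following is a generic two-point (SPSA-style) zeroth-order update restricted by a binary parameter mask. It is a sketch under stated assumptions, not MaZO's algorithm: the mask construction, smoothing scale mu, and learning rate are placeholders.

```python
# Generic masked two-point zeroth-order update; illustrative only, not MaZO's
# algorithm. `loss_fn`, `mask`, `mu`, and `lr` are assumed placeholders.
import numpy as np


def masked_zo_step(theta, loss_fn, mask, mu=1e-3, lr=1e-2, rng=None):
    """One zeroth-order step that only perturbs and updates coordinates where mask == 1."""
    if rng is None:
        rng = np.random.default_rng()
    z = rng.standard_normal(theta.shape) * mask          # perturb only "important" weights
    g_hat = (loss_fn(theta + mu * z) - loss_fn(theta - mu * z)) / (2.0 * mu) * z
    return theta - lr * g_hat * mask                      # update only masked coordinates


# Toy usage: quadratic loss, mask restricts updates to the first half of the parameters.
theta = np.ones(10)
mask = np.concatenate([np.ones(5), np.zeros(5)])
loss = lambda t: float(np.sum(t ** 2))
for _ in range(200):
    theta = masked_zo_step(theta, loss, mask, mu=1e-2, lr=5e-2)
```

In a realistic fine-tuning setting the mask would come from a weight-importance metric and the loss would be evaluated with forward passes only, which is the appeal of zeroth-order methods for large models.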
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.