Measuring Fine-Grained Relatedness in Multitask Learning via Data Attribution
- URL: http://arxiv.org/abs/2505.21438v1
- Date: Tue, 27 May 2025 17:13:31 GMT
- Title: Measuring Fine-Grained Relatedness in Multitask Learning via Data Attribution
- Authors: Yiwen Tu, Ziqi Liu, Jiaqi W. Ma, Weijing Tang
- Abstract summary: Measuring task relatedness and mitigating negative transfer remain critical open challenges in Multitask Learning. We propose the MultiTask Influence Function (MTIF), a method that adapts influence functions to MTL models with hard or soft parameter sharing. Our work establishes a novel connection between data attribution and MTL, offering an efficient and fine-grained solution for measuring task relatedness.
- Score: 10.818917537653688
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Measuring task relatedness and mitigating negative transfer remain critical open challenges in Multitask Learning (MTL). This work extends data attribution -- which quantifies the influence of individual training data points on model predictions -- to the MTL setting for measuring task relatedness. We propose the MultiTask Influence Function (MTIF), a method that adapts influence functions to MTL models with hard or soft parameter sharing. Compared to conventional task relatedness measurements, MTIF provides a fine-grained, instance-level relatedness measure beyond the entire-task level. This fine-grained relatedness measure enables a data selection strategy that effectively mitigates negative transfer in MTL. Through extensive experiments, we demonstrate that the proposed MTIF efficiently and accurately approximates the performance of models trained on data subsets. Moreover, the data selection strategy enabled by MTIF consistently improves model performance in MTL. Our work establishes a novel connection between data attribution and MTL, offering an efficient and fine-grained solution for measuring task relatedness and enhancing MTL models.
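To make the mechanism concrete, below is a minimal, hedged sketch of cross-task influence scoring in the spirit of influence functions. It uses a toy, fully shared linear model and classic influence-function algebra, not the paper's exact MTIF formulation; all names and the synthetic setup are illustrative.

```python
# Sketch: influence of task-2 training points on task-1 validation loss in a
# fully shared linear model (toy stand-in for hard parameter sharing).
# Not the paper's MTIF; the classic influence-function formula is used instead.
import numpy as np

rng = np.random.default_rng(0)

d, n1, n2 = 5, 200, 200
w_true = rng.normal(size=d)
X1, X2 = rng.normal(size=(n1, d)), rng.normal(size=(n2, d))
y1 = X1 @ w_true + 0.1 * rng.normal(size=n1)
# Task 2 is only partially related to task 1.
y2 = X2 @ (w_true + 0.5 * rng.normal(size=d)) + 0.1 * rng.normal(size=n2)

lam = 1e-2  # ridge term keeps the Hessian invertible

# Joint ridge solution over both tasks (closed form for this toy model).
A = X1.T @ X1 / n1 + X2.T @ X2 / n2 + lam * np.eye(d)
b = X1.T @ y1 / n1 + X2.T @ y2 / n2
w_star = np.linalg.solve(A, b)

# Hessian of the joint objective and gradient of task-1 validation loss.
H = 2.0 * A
X1_val = rng.normal(size=(50, d))
y1_val = X1_val @ w_true + 0.1 * rng.normal(size=50)
grad_val = 2.0 * X1_val.T @ (X1_val @ w_star - y1_val) / len(y1_val)

# Influence of each task-2 training point z on task-1 validation loss:
# I(z) = -grad_val^T H^{-1} grad_train(z).
H_inv_grad_val = np.linalg.solve(H, grad_val)
per_point_grads = 2.0 * X2 * (X2 @ w_star - y2)[:, None] / n2  # shape (n2, d)
influence = -per_point_grads @ H_inv_grad_val

print("most helpful task-2 indices:", np.argsort(influence)[:5])
print("most harmful task-2 indices:", np.argsort(influence)[-5:])
```

The sign convention follows the usual influence-function reading: a negative score means upweighting that task-2 point is predicted to lower task-1 validation loss, which is the kind of instance-level signal a data selection strategy can act on.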
Related papers
- Rep-MTL: Unleashing the Power of Representation-level Task Saliency for Multi-Task Learning [27.472039054277644]
Rep-MTL exploits representation-level task saliency to quantify interactions between task-specific optimization and shared representation learning. Rep-MTL aims to mitigate negative transfer by maintaining the effective training of individual tasks instead of relying on pure conflict-solving.
arXiv Detail & Related papers (2025-07-28T17:59:28Z) - MTL-UE: Learning to Learn Nothing for Multi-Task Learning [98.42358524454731]
This paper presents MTL-UE, the first unified framework for generating unlearnable examples for multi-task data and MTL models. Instead of optimizing robustness for each sample, we design a generator-based structure that introduces label priors and class-wise feature embeddings. In addition, MTL-UE incorporates intra-task and inter-task embedding regularization to increase inter-class separation and suppress intra-class variance.
arXiv Detail & Related papers (2025-05-08T14:26:00Z) - Empowering Large Language Models in Wireless Communication: A Novel Dataset and Fine-Tuning Framework [81.29965270493238]
We develop a specialized dataset aimed at enhancing the evaluation and fine-tuning of large language models (LLMs) for wireless communication applications. The dataset includes a diverse set of multi-hop questions, including true/false and multiple-choice types, spanning varying difficulty levels from easy to hard. We introduce a Pointwise V-Information (PVI) based fine-tuning method, providing a detailed theoretical analysis and justification for its use in quantifying the information content of training data.
arXiv Detail & Related papers (2025-01-16T16:19:53Z) - R-MTLLMF: Resilient Multi-Task Large Language Model Fusion at the Wireless Edge [78.26352952957909]
Multi-task large language models (MTLLMs) are important for many applications at the wireless edge, where users demand specialized models to handle multiple tasks efficiently. The concept of model fusion via task vectors has emerged as an efficient approach for combining fine-tuning parameters to produce an MTLLM. In this paper, the problem of enabling edge users to collaboratively craft such MTLLMs via task vectors is studied, under the assumption of worst-case adversarial attacks.
arXiv Detail & Related papers (2024-11-27T10:57:06Z) - MetaGPT: Merging Large Language Models Using Model Exclusive Task Arithmetic [6.46176287368784]
We propose Model Exclusive Task Arithmetic for merging GPT-scale models.
Our proposed MetaGPT is data-agnostic and bypasses the heavy search process, making it cost-effective and easy to implement for LLMs.
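As context for the task-arithmetic idea, here is a minimal sketch of task-vector merging; the per-task scaling weights below are plain hyperparameters chosen by hand, whereas MetaGPT derives its coefficients in closed form, which is not reproduced here. The toy state dicts are made up for illustration.

```python
# Generic task-arithmetic merging: tau_t = theta_t - theta_0, theta_merged = theta_0 + sum_t lambda_t * tau_t.
import torch

def task_vector(pretrained, finetuned):
    # Task vector = fine-tuned weights minus pretrained weights, per parameter tensor.
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def merge(pretrained, task_vectors, lambdas):
    # Merged model = base weights plus scaled sum of task vectors.
    merged = {k: v.clone() for k, v in pretrained.items()}
    for lam, tau in zip(lambdas, task_vectors):
        for k in merged:
            merged[k] += lam * tau[k]
    return merged

# Toy example: two "fine-tuned" models derived from a shared base.
base = {"w": torch.zeros(4), "b": torch.zeros(1)}
ft_a = {"w": torch.tensor([1.0, 0.0, 0.0, 0.0]), "b": torch.tensor([0.1])}
ft_b = {"w": torch.tensor([0.0, 1.0, 0.0, 0.0]), "b": torch.tensor([-0.1])}

taus = [task_vector(base, ft_a), task_vector(base, ft_b)]
merged = merge(base, taus, lambdas=[0.5, 0.5])  # scaling weights are a design choice
print(merged["w"], merged["b"])
```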
arXiv Detail & Related papers (2024-06-17T10:12:45Z) - MTLComb: multi-task learning combining regression and classification tasks for joint feature selection [3.708475728683911]
Multi-task learning (MTL) is a learning paradigm that enables the simultaneous training of multiple communicating algorithms.
We propose a provable loss weighting scheme that analytically determines the optimal weights for balancing regression and classification tasks.
We introduce MTLComb, an MTL algorithm and software package encompassing optimization procedures, training protocols, and hyperparameter estimation procedures.
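For illustration, here is a minimal sketch of a weighted joint objective mixing one regression head and one classification head over shared features. The weights are ordinary hyperparameters in this sketch, whereas MTLComb determines them analytically; the model and variable names are made up for the example.

```python
# Joint loss over shared features: w_reg * MSE + w_clf * cross-entropy.
import torch
import torch.nn as nn

class SharedMTL(nn.Module):
    def __init__(self, d_in, d_hidden=16, n_classes=3):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.reg_head = nn.Linear(d_hidden, 1)
        self.clf_head = nn.Linear(d_hidden, n_classes)

    def forward(self, x):
        h = self.shared(x)
        return self.reg_head(h).squeeze(-1), self.clf_head(h)

model = SharedMTL(d_in=8)
x = torch.randn(32, 8)
y_reg = torch.randn(32)
y_clf = torch.randint(0, 3, (32,))

w_reg, w_clf = 1.0, 0.5  # hand-picked here; MTLComb derives these analytically
y_reg_hat, y_clf_logits = model(x)
loss = (w_reg * nn.functional.mse_loss(y_reg_hat, y_reg)
        + w_clf * nn.functional.cross_entropy(y_clf_logits, y_clf))
loss.backward()
```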
arXiv Detail & Related papers (2024-05-16T08:07:25Z) - Task-Distributionally Robust Data-Free Meta-Learning [99.56612787882334]
Data-Free Meta-Learning (DFML) aims to efficiently learn new tasks by leveraging multiple pre-trained models without requiring their original training data.
For the first time, we reveal two major challenges hindering their practical deployment: Task-Distribution Shift (TDS) and Task-Distribution Corruption (TDC).
arXiv Detail & Related papers (2023-11-23T15:46:54Z) - Multi-Task Learning as a Bargaining Game [63.49888996291245]
In Multi-task learning (MTL), a joint model is trained to simultaneously make predictions for several tasks.
Since the gradients of these different tasks may conflict, training a joint model for MTL often yields lower performance than its corresponding single-task counterparts.
We propose viewing the gradients combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update.
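To show where such methods intervene, here is a small sketch of the per-task gradient combination step on shared parameters. The paper's bargaining solution for choosing the combination weights is not reproduced; the sketch only computes per-task gradients, a naive equal-weight combination, and a pairwise conflict check, with illustrative names throughout.

```python
# Generic gradient-combination interface for shared parameters in MTL.
import torch

def per_task_gradients(losses, shared_params):
    # Flattened gradient of each task loss w.r.t. the shared parameters.
    grads = []
    for loss in losses:
        g = torch.autograd.grad(loss, shared_params, retain_graph=True)
        grads.append(torch.cat([gi.reshape(-1) for gi in g]))
    return torch.stack(grads)  # shape (num_tasks, num_shared_params)

def combine(grads, alphas=None):
    # Joint update direction = sum_t alpha_t * g_t (equal weights by default;
    # bargaining-based methods choose alphas differently).
    if alphas is None:
        alphas = torch.full((grads.shape[0],), 1.0 / grads.shape[0])
    return alphas @ grads

def conflict_matrix(grads):
    # Pairwise cosine similarities; negative entries indicate conflicting tasks.
    g = torch.nn.functional.normalize(grads, dim=1)
    return g @ g.T

# Toy usage: two losses defined on one shared parameter vector.
w = torch.randn(10, requires_grad=True)
losses = [(w ** 2).sum(), ((w - 1.0) ** 2).sum()]
G = per_task_gradients(losses, [w])
print(conflict_matrix(G))
update_direction = combine(G)
```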
arXiv Detail & Related papers (2022-02-02T13:21:53Z) - Learning Functions to Study the Benefit of Multitask Learning [25.325601027501836]
We study and quantify the generalization patterns of multitask learning (MTL) models for sequence labeling tasks.
Although multitask learning has achieved improved performance in some problems, there are also tasks that lose performance when trained together.
arXiv Detail & Related papers (2020-06-09T23:51:32Z) - Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
We further extend the base model by allowing overlapping features and differentiating the hard tasks.
arXiv Detail & Related papers (2020-04-29T02:32:04Z)