Alternate Training of Shared and Task-Specific Parameters for Multi-Task
Neural Networks
- URL: http://arxiv.org/abs/2312.16340v1
- Date: Tue, 26 Dec 2023 21:33:03 GMT
- Title: Alternate Training of Shared and Task-Specific Parameters for Multi-Task
Neural Networks
- Authors: Stefania Bellavia, Francesco Della Santa, Alessandra Papini
- Abstract summary: This paper introduces novel alternate training procedures for hard-parameter sharing Multi-Task Neural Networks (MTNNs).
The proposed alternate training method updates shared and task-specific weights alternately, exploiting the multi-head architecture of the model.
Empirical experiments demonstrate delayed overfitting, improved prediction, and reduced computational demands.
- Score: 49.1574468325115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces novel alternate training procedures for hard-parameter
sharing Multi-Task Neural Networks (MTNNs). Traditional MTNN training faces
challenges in managing conflicting loss gradients, often yielding sub-optimal
performance. The proposed alternate training method updates shared and
task-specific weights alternately, exploiting the multi-head architecture of
the model. This approach reduces computational costs, enhances training
regularization, and improves generalization. Convergence properties similar to
those of the classical stochastic gradient method are established. Empirical
experiments demonstrate delayed overfitting, improved prediction, and reduced
computational demands. In summary, our alternate training procedures offer a
promising advancement for the training of hard-parameter sharing MTNNs.
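As a rough illustration of the idea described in the abstract, the sketch below alternates parameter updates between a shared trunk and task-specific heads of a hard-parameter-sharing multi-task network. It is a minimal sketch assuming a PyTorch-style setup; the architecture, the two-optimizer scheme, the even/odd alternation schedule, and the names MultiTaskNet and alternate_step are illustrative assumptions, not the authors' exact procedure or its convergence conditions.

```python
# Minimal sketch of alternating updates for a hard-parameter-sharing
# multi-task network. Illustrative only: layer sizes, optimizers, and
# the even/odd schedule are assumptions, not the paper's exact method.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        # Shared trunk (hard-parameter sharing).
        self.shared = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One task-specific head per task (multi-head architecture).
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, d) for d in task_out_dims]
        )

    def forward(self, x):
        z = self.shared(x)
        return [head(z) for head in self.heads]

def alternate_step(model, x, targets, loss_fns,
                   opt_shared, opt_heads, update_shared):
    """One training step that updates only one parameter group."""
    outputs = model(x)
    total_loss = sum(fn(out, tgt)
                     for fn, out, tgt in zip(loss_fns, outputs, targets))
    model.zero_grad()  # clear gradients on both parameter groups
    total_loss.backward()
    # Step only the selected group: shared trunk or task-specific heads.
    (opt_shared if update_shared else opt_heads).step()
    return total_loss.item()

# Usage: two optimizers over disjoint parameter groups, alternated per step.
model = MultiTaskNet(in_dim=16, hidden_dim=32, task_out_dims=[1, 3])
opt_shared = torch.optim.SGD(model.shared.parameters(), lr=1e-2)
opt_heads = torch.optim.SGD(model.heads.parameters(), lr=1e-2)
loss_fns = [nn.MSELoss(), nn.CrossEntropyLoss()]

x = torch.randn(8, 16)                                # toy batch
targets = [torch.randn(8, 1), torch.randint(0, 3, (8,))]
for step in range(4):
    alternate_step(model, x, targets, loss_fns,
                   opt_shared, opt_heads, update_shared=(step % 2 == 0))
```

One plausible source of savings in such a scheme: when only the heads are being updated, the shared features could be computed under torch.no_grad() and detached, avoiding backpropagation through the trunk. The exact mechanism behind the reduced computational demands reported above should be taken from the paper itself.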
Related papers
- An Augmented Backward-Corrected Projector Splitting Integrator for Dynamical Low-Rank Training [47.69709732622765]
We introduce a novel low-rank training method that reduces the number of required QR decompositions.
Our approach integrates an augmentation step into a projector-splitting scheme, ensuring convergence to a locally optimal solution.
arXiv Detail & Related papers (2025-02-05T09:03:50Z) - Optimizing Dense Visual Predictions Through Multi-Task Coherence and Prioritization [7.776434991976473]
Multi-Task Learning (MTL) involves the concurrent training of multiple tasks.
We propose an advanced MTL model specifically designed for dense vision tasks.
arXiv Detail & Related papers (2024-12-04T10:05:47Z) - Proactive Gradient Conflict Mitigation in Multi-Task Learning: A Sparse Training Perspective [33.477681689943516]
A common issue in multi-task learning is the occurrence of gradient conflict.
We propose a strategy to reduce such conflicts through sparse training (ST).
Our experiments demonstrate that ST effectively mitigates conflicting gradients and leads to superior performance.
arXiv Detail & Related papers (2024-11-27T18:58:22Z) - LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging [80.17238673443127]
LiNeS is a post-training editing technique designed to preserve pre-trained generalization while enhancing fine-tuned task performance.
LiNeS demonstrates significant improvements in both single-task and multi-task settings across various benchmarks in vision and natural language processing.
arXiv Detail & Related papers (2024-10-22T16:26:05Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - A Dirichlet Process Mixture of Robust Task Models for Scalable Lifelong
Reinforcement Learning [11.076005074172516]
Reinforcement learning algorithms can easily encounter catastrophic forgetting or interference when faced with lifelong streaming information.
We propose a scalable lifelong RL method that dynamically expands the network capacity to accommodate new knowledge.
We show that our method successfully facilitates scalable lifelong RL and outperforms relevant existing methods.
arXiv Detail & Related papers (2022-05-22T09:48:41Z) - Multi-Task Learning as a Bargaining Game [63.49888996291245]
In Multi-task learning (MTL), a joint model is trained to simultaneously make predictions for several tasks.
Since the gradients of these different tasks may conflict, training a joint model for MTL often yields lower performance than its corresponding single-task counterparts.
We propose viewing the gradients combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update.
arXiv Detail & Related papers (2022-02-02T13:21:53Z) - An Optimization-Based Meta-Learning Model for MRI Reconstruction with
Diverse Dataset [4.9259403018534496]
We develop a generalizable MRI reconstruction model in the meta-learning framework.
The proposed network learns the regularization function within a learner-adaptation model.
After meta-training, the model can be quickly trained on unseen tasks, saving half of the training time.
arXiv Detail & Related papers (2021-10-02T03:21:52Z) - Subset Sampling For Progressive Neural Network Learning [106.12874293597754]
Progressive Neural Network Learning is a class of algorithms that incrementally construct the network's topology and optimize its parameters based on the training data.
We propose to speed up this process by exploiting subsets of training data at each incremental training step.
Experimental results in object, scene and face recognition problems demonstrate that the proposed approach speeds up the optimization procedure considerably.
arXiv Detail & Related papers (2020-02-17T18:57:33Z)