Auxiliary Learning as an Asymmetric Bargaining Game
- URL: http://arxiv.org/abs/2301.13501v2
- Date: Mon, 5 Jun 2023 07:37:19 GMT
- Title: Auxiliary Learning as an Asymmetric Bargaining Game
- Authors: Aviv Shamsian, Aviv Navon, Neta Glazer, Kenji Kawaguchi, Gal Chechik,
Ethan Fetaya
- Abstract summary: We propose a novel approach, named AuxiNash, for balancing tasks in auxiliary learning.
We describe an efficient procedure for learning the bargaining power of tasks based on their contribution to the performance of the main task.
We evaluate AuxiNash on multiple multi-task benchmarks and find that it consistently outperforms competing methods.
- Score: 50.826710465264505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Auxiliary learning is an effective method for enhancing the generalization
capabilities of trained models, particularly when dealing with small datasets.
However, this approach presents several difficulties: (i) optimizing
multiple objectives can be more challenging than optimizing the main
objective alone, and (ii) it is unclear how to balance the auxiliary tasks
so that they best assist the main task. In this work, we propose a novel
approach, named AuxiNash, for balancing tasks in auxiliary learning by
formalizing the problem as a generalized bargaining game with asymmetric
task bargaining power. Furthermore, we describe an efficient
procedure for learning the bargaining power of tasks based on their
contribution to the performance of the main task and derive theoretical
guarantees for its convergence. Finally, we evaluate AuxiNash on multiple
multi-task benchmarks and find that it consistently outperforms competing
methods.
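To make the bargaining view concrete, here is a minimal sketch, assuming per-task gradients are combined so that each task's gain in the joint update is proportional to its bargaining power. The function name, the plain fixed-point solver, and the uniform initialization are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def asymmetric_nash_update(task_grads, powers, n_iters=50, eps=1e-8):
    """Illustrative solver for an asymmetric Nash bargaining direction.

    task_grads: (K, d) array, one flattened gradient per task.
    powers: length-K bargaining powers (nonnegative, summing to 1).
    Seeks alpha with alpha_i * (G G^T alpha)_i = p_i, a first-order
    condition of maximizing sum_i p_i * log(g_i . d) over d = G^T alpha.
    """
    G = np.asarray(task_grads, dtype=np.float64)
    K = G.shape[0]
    gram = G @ G.T                      # pairwise gradient inner products
    alpha = np.full(K, 1.0 / K)         # uniform starting point
    for _ in range(n_iters):
        alpha = powers / np.maximum(gram @ alpha, eps)  # fixed-point step
    return alpha @ G                    # combined update direction
```

With a larger bargaining power assigned to the main task, the joint direction tilts toward it while the auxiliaries still contribute.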
Related papers
- Sharing Knowledge in Multi-Task Deep Reinforcement Learning [57.38874587065694]
We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning.
We support this by providing theoretical guarantees that highlight the conditions under which it is beneficial to share representations among tasks.
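As a minimal sketch of such representation sharing (the architecture, sizes, and names below are our assumptions, not the paper's):

```python
import torch
import torch.nn as nn

class SharedRepresentationNet(nn.Module):
    """One encoder shared by all tasks; a lightweight head per task."""
    def __init__(self, obs_dim: int, n_actions: int, n_tasks: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(          # shared representation
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(            # task-specific outputs
            [nn.Linear(hidden, n_actions) for _ in range(n_tasks)]
        )

    def forward(self, obs: torch.Tensor, task_id: int) -> torch.Tensor:
        return self.heads[task_id](self.encoder(obs))
```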
arXiv Detail & Related papers (2024-01-17T19:31:21Z)
- Auxiliary task discovery through generate-and-test [7.800263769988046]
Auxiliary tasks improve data efficiency by forcing the agent to learn auxiliary prediction and control objectives.
In this paper, we explore an approach to auxiliary task discovery in reinforcement learning based on ideas from representation learning.
We introduce a new measure of auxiliary tasks' usefulness based on how useful the features induced by them are for the main task.
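A hypothetical proxy for such a usefulness measure (our own construction, not the paper's formula): credit each auxiliary task with how strongly the main-task head uses the features that task induced.

```python
import torch

def feature_usefulness(main_head_weight: torch.Tensor,
                       feature_owner: list[int],
                       n_aux: int) -> torch.Tensor:
    """main_head_weight: (n_classes, n_features) weights of the main head.
    feature_owner[j]: auxiliary task credited with inducing feature j.
    Returns a normalized usefulness score per auxiliary task."""
    usage = main_head_weight.abs().sum(dim=0)   # per-feature usage by the main task
    scores = torch.zeros(n_aux)
    for j, owner in enumerate(feature_owner):
        scores[owner] += usage[j]
    return scores / scores.sum()
```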
arXiv Detail & Related papers (2022-10-25T22:04:37Z)
- Composite Learning for Robust and Effective Dense Predictions [81.2055761433725]
Multi-task learning promises better model generalization on a target task by jointly optimizing it with an auxiliary task.
We find that jointly training a dense prediction (target) task with a self-supervised (auxiliary) task can consistently improve the performance of the target task, while eliminating the need for labeling auxiliary tasks.
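A sketch of such a joint objective (the auxiliary weight and the choice of self-supervised loss are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def composite_loss(model, images, dense_labels, ssl_loss_fn, w_aux=0.5):
    """Supervised dense-prediction loss plus a label-free auxiliary loss.
    ssl_loss_fn is any self-supervised objective (e.g. rotation prediction);
    it needs no annotations, so the auxiliary task costs no extra labeling."""
    logits = model(images)                        # (N, C, H, W) per-pixel scores
    main = F.cross_entropy(logits, dense_labels)  # dense_labels: (N, H, W)
    aux = ssl_loss_fn(model, images)              # self-supervised term
    return main + w_aux * aux
```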
arXiv Detail & Related papers (2022-10-13T17:59:16Z)
- Leveraging convergence behavior to balance conflicting tasks in multi-task learning [3.6212652499950138]
Multi-Task Learning uses correlated tasks to improve performance generalization.
Tasks often conflict with each other, which makes it challenging to define how the gradients of multiple tasks should be combined.
We propose a method that takes into account the temporal behaviour of the gradients to create a dynamic bias that adjusts the importance of each task during backpropagation.
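As a rough sketch of a dynamic, convergence-aware bias (the EMA rule below is our stand-in, not the paper's exact scheme):

```python
import torch

class ConvergenceAwareWeights:
    """Weight tasks by their own convergence trend: tasks whose current
    loss is still below their running average keep making progress and
    receive relatively more weight in the combined objective."""
    def __init__(self, beta: float = 0.9):
        self.trend = None   # per-task EMA of past losses
        self.beta = beta

    def __call__(self, losses: torch.Tensor) -> torch.Tensor:
        current = losses.detach()
        if self.trend is None:
            self.trend = current.clone()
        self.trend = self.beta * self.trend + (1 - self.beta) * current
        progress = self.trend / current.clamp_min(1e-8)  # > 1 while improving
        return progress / progress.sum()
```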
arXiv Detail & Related papers (2022-04-14T01:52:34Z)
- Transfer Learning in Conversational Analysis through Reusing Preprocessing Data as Supervisors [52.37504333689262]
Using noisy labels in single-task learning increases the risk of over-fitting.
Auxiliary tasks can improve the performance of the primary task when learned jointly during the same training.
arXiv Detail & Related papers (2021-12-02T08:40:42Z)
- Auxiliary Task Reweighting for Minimum-data Learning [118.69683270159108]
Supervised learning requires a large amount of training data, limiting its application where labeled data is scarce.
To compensate for data scarcity, one possible method is to utilize auxiliary tasks to provide additional supervision for the main task.
We propose a method to automatically reweight auxiliary tasks in order to reduce the data requirement on the main task.
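A simple stand-in for such automatic reweighting (cosine alignment with a softmax is our illustrative choice; the paper derives its rule differently):

```python
import torch
import torch.nn.functional as F

def reweight_auxiliaries(main_grad: torch.Tensor,
                         aux_grads: list[torch.Tensor],
                         temperature: float = 1.0) -> torch.Tensor:
    """Weight each auxiliary task by how well its flattened gradient
    aligns with the main task's gradient, so helpful tasks dominate."""
    sims = torch.stack([
        F.cosine_similarity(main_grad, g, dim=0) for g in aux_grads
    ])
    return torch.softmax(sims / temperature, dim=0)
```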
arXiv Detail & Related papers (2020-10-16T08:45:37Z)
- A Brief Review of Deep Multi-task Learning and Auxiliary Task Learning [0.0]
Multi-task learning (MTL) optimizes several learning tasks simultaneously.
Auxiliary tasks can be added to the main task to boost its performance.
arXiv Detail & Related papers (2020-07-02T14:23:39Z)
- Auxiliary Learning by Implicit Differentiation [54.92146615836611]
Training neural networks with auxiliary tasks is a common practice for improving the performance on a main task of interest.
Here, we propose a novel framework, AuxiLearn, that targets two challenges, combining known auxiliary tasks and designing new ones, based on implicit differentiation.
First, when useful auxiliaries are known, we propose learning a network that combines all losses into a single coherent objective function.
Second, when no useful auxiliary task is known, we describe how to learn a network that generates a meaningful, novel auxiliary task.
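For the first setting, a minimal sketch of a learned loss combiner (the sizes and nonlinearities are our assumptions; the implicit-differentiation update that tunes it is omitted):

```python
import torch
import torch.nn as nn

class LossCombiner(nn.Module):
    """Map the vector of per-task losses to one scalar training objective.
    In AuxiLearn-style training, these parameters would be optimized on a
    small validation set via implicit differentiation (not shown here)."""
    def __init__(self, n_tasks: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_tasks, hidden), nn.Softplus(),
            nn.Linear(hidden, 1), nn.Softplus(),  # keep the combined loss nonnegative
        )

    def forward(self, task_losses: torch.Tensor) -> torch.Tensor:
        return self.net(task_losses).squeeze(-1)
```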
arXiv Detail & Related papers (2020-06-22T19:35:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.