Multitask learning over graphs: An Approach for Distributed, Streaming
Machine Learning
- URL: http://arxiv.org/abs/2001.02112v2
- Date: Tue, 21 Sep 2021 14:23:01 GMT
- Title: Multitask learning over graphs: An Approach for Distributed, Streaming
Machine Learning
- Authors: Roula Nassif, Stefan Vlaski, Cedric Richard, Jie Chen, and Ali H.
Sayed
- Abstract summary: Multitask learning is an approach to inductive transfer learning.
Recent years have witnessed an increasing ability to collect data in a distributed and streaming manner.
This requires the design of new strategies for jointly learning multiple tasks from streaming data over distributed (or networked) systems.
- Score: 46.613346075513206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of learning simultaneously several related tasks has received
considerable attention in several domains, especially in machine learning with
the so-called multitask learning problem or learning to learn problem [1], [2].
Multitask learning is an approach to inductive transfer learning (using what is
learned for one problem to assist in another problem) and helps improve
generalization performance relative to learning each task separately by using
the domain information contained in the training signals of related tasks as an
inductive bias. Several strategies have been derived within this community
under the assumption that all data are available beforehand at a fusion center.
However, recent years have witnessed an increasing ability to collect data in a
distributed and streaming manner. This requires the design of new strategies
for jointly learning multiple tasks from streaming data over distributed (or
networked) systems. This article provides an overview of multitask strategies
for learning and adaptation over networks. The working hypothesis for these
strategies is that agents are allowed to cooperate with each other in order to
learn distinct, though related tasks. The article shows how cooperation steers
the network limiting point and how different cooperation rules make it possible to promote
different task relatedness models. It also explains how and when cooperation
over multitask networks outperforms non-cooperative strategies.
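As a rough, self-contained illustration of the kind of strategy the article surveys (not a reproduction of its algorithms), the sketch below runs an adapt-then-combine diffusion LMS recursion over a small network: each agent takes a local LMS step on its newest streaming sample and then averages its intermediate estimate with those of its neighbours. The ring topology, step size, and combination weights are arbitrary choices made for the example.

```python
import numpy as np

# Adapt-then-combine diffusion LMS over N agents, each estimating its own
# task vector w_k from streaming linear-regression data while averaging with
# its neighbours. Topology and hyperparameters are illustrative choices.
rng = np.random.default_rng(0)
N, M, mu = 10, 5, 0.01                     # agents, parameter dimension, step size

# Related but distinct tasks: a shared component plus small per-agent offsets.
w_true = rng.standard_normal(M) + 0.1 * rng.standard_normal((N, M))

# Doubly stochastic combination matrix over a ring topology.
A = 0.5 * np.eye(N)
for k in range(N):
    A[k, (k - 1) % N] += 0.25
    A[k, (k + 1) % N] += 0.25

w = np.zeros((N, M))                       # current estimates
for i in range(5000):                      # streaming samples arrive one at a time
    psi = np.empty_like(w)
    for k in range(N):                     # adapt: local LMS step at agent k
        u = rng.standard_normal(M)
        d = u @ w_true[k] + 0.01 * rng.standard_normal()
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    w = A @ psi                            # combine: average neighbours' estimates

print("relative error:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))
```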
Related papers
- Efficient Computation Sharing for Multi-Task Visual Scene Understanding [16.727967046330125]
Multi-task learning can conserve resources by sharing knowledge across different tasks.
We present a novel computation- and parameter-sharing framework that balances efficiency and accuracy to perform multiple visual tasks.
arXiv Detail & Related papers (2023-03-16T21:47:40Z)
- Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners [67.5865966762559]
We study whether sparsely activated Mixture-of-Experts (MoE) improve multi-task learning.
We devise task-aware gating functions to route examples from different tasks to specialized experts.
This results in a sparsely activated multi-task model with a large number of parameters, but with the same computational cost as that of a dense model.
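A hypothetical sketch of what task-aware top-1 routing could look like is given below; the class name, layer sizes, and gating parameterization are our own choices, not the paper's. Only the selected expert runs for each example, so per-example compute stays close to that of a single dense layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskAwareMoE(nn.Module):
    """Sparsely activated MoE layer whose gate conditions on a task embedding."""

    def __init__(self, d_model=64, n_experts=4, n_tasks=3):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(d_model, d_model) for _ in range(n_experts)])
        self.task_emb = nn.Embedding(n_tasks, d_model)
        self.gate = nn.Linear(2 * d_model, n_experts)

    def forward(self, x, task_id):
        # The gate sees both the input representation and the task embedding,
        # so examples from different tasks can be routed to different experts.
        t = self.task_emb(task_id).expand(x.size(0), -1)
        logits = self.gate(torch.cat([x, t], dim=-1))
        expert_idx = logits.argmax(dim=-1)                       # top-1 routing
        gate_val = F.softmax(logits, dim=-1).gather(1, expert_idx.unsqueeze(1))
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):                # only chosen experts run
            mask = expert_idx == e
            if mask.any():
                out[mask] = gate_val[mask] * expert(x[mask])
        return out

layer = TaskAwareMoE()
x = torch.randn(8, 64)                     # a batch of 8 examples
y = layer(x, torch.tensor(2))              # all drawn from (hypothetical) task 2
```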
arXiv Detail & Related papers (2022-04-16T00:56:12Z)
- Gap Minimization for Knowledge Sharing and Transfer [24.954256258648982]
In this paper, we introduce the notion of performance gap, an intuitive and novel measure of the distance between learning tasks.
We show that the performance gap can be viewed as a data- and algorithm-dependent regularizer, which controls the model complexity and leads to finer guarantees.
We instantiate this principle with two algorithms: 1. gapBoost, a novel and principled boosting algorithm that explicitly minimizes the performance gap between source and target domains for transfer learning; and 2. gapMTNN, a representation learning algorithm that reformulates gap minimization as semantic conditional matching.
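As a loose illustration only (the paper's actual definition of the performance gap, and the gapBoost and gapMTNN algorithms, are more involved), one can picture a regularizer that penalizes the difference between a model's empirical losses on source and target data:

```python
import numpy as np

# Loose illustration: treat the "performance gap" as the absolute difference
# between a model's empirical losses on source and target data, and penalize
# it alongside the target loss. This is not the paper's exact formulation.

def gap_regularized_loss(w, X_src, y_src, X_tgt, y_tgt, lam=0.5):
    """Target squared loss plus a penalty on the source/target loss gap."""
    loss_src = np.mean((X_src @ w - y_src) ** 2)
    loss_tgt = np.mean((X_tgt @ w - y_tgt) ** 2)
    return loss_tgt + lam * abs(loss_src - loss_tgt)

# Usage with random data and a fixed parameter vector, just to show shapes.
rng = np.random.default_rng(0)
w = rng.standard_normal(4)
X_src, y_src = rng.standard_normal((30, 4)), rng.standard_normal(30)
X_tgt, y_tgt = rng.standard_normal((10, 4)), rng.standard_normal(10)
print(gap_regularized_loss(w, X_src, y_src, X_tgt, y_tgt))
```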
arXiv Detail & Related papers (2022-01-26T23:06:20Z)
- Learning Multiple Dense Prediction Tasks from Partially Annotated Data [41.821234589075445]
We look at jointly learning multiple dense prediction tasks on partially annotated data, which we call multi-task partially-supervised learning.
We propose a multi-task training procedure that successfully leverages task relations to supervise its multi-task learning when data is partially annotated.
We rigorously demonstrate that our proposed method effectively exploits the images with unlabelled tasks and outperforms existing semi-supervised learning approaches and related methods on three standard benchmarks.
arXiv Detail & Related papers (2021-11-29T19:03:12Z)
- Measuring and Harnessing Transference in Multi-Task Learning [58.48659733262734]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We analyze the dynamics of information transfer, or transference, across tasks throughout training.
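One simple way to probe transference, sketched below under our own assumptions rather than the paper's exact protocol, is to take a lookahead gradient step on task i's loss and measure how task j's loss changes:

```python
import torch

# Rough probe of transference between tasks i and j: take a lookahead
# gradient step on task i's loss and see how task j's loss changes.
# loss_i / loss_j are callables of the form loss(model, batch).

def transference(model, loss_i, loss_j, batch_i, batch_j, lr=0.1):
    base_j = loss_j(model, batch_j).item()
    grads = torch.autograd.grad(loss_i(model, batch_i), list(model.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p -= lr * g                          # lookahead step on task i
        after_j = loss_j(model, batch_j).item()
        for p, g in zip(model.parameters(), grads):
            p += lr * g                          # undo the lookahead step
    return 1.0 - after_j / base_j                # > 0: the step on task i helped task j

# Example with a tiny shared model and two squared-error "tasks".
model = torch.nn.Linear(3, 1)
mse = lambda m, b: torch.mean((m(b[0]) - b[1]) ** 2)
batch = (torch.randn(16, 3), torch.randn(16, 1))
print(transference(model, mse, mse, batch, batch))
```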
arXiv Detail & Related papers (2020-10-29T08:25:43Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on the other tasks.
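A minimal sketch of one such coupling, under our own simplifications (linear models per task and a quadratic penalty toward the mean of the per-task parameters), is shown below; the paper's actual formulation may couple the functions differently.

```python
import numpy as np

# Each task keeps its own parameters w[t], but a coupling penalty keeps the
# per-task parameters close to their common mean (illustrative choice).

def cross_learning_step(w, tasks, mu=0.01, eta=0.5):
    """One gradient step per task on squared loss plus the coupling penalty."""
    w_bar = w.mean(axis=0)
    new_w = np.empty_like(w)
    for t, (X, y) in enumerate(tasks):
        grad_fit = 2.0 * X.T @ (X @ w[t] - y) / len(y)
        grad_couple = 2.0 * eta * (w[t] - w_bar)
        new_w[t] = w[t] - mu * (grad_fit + grad_couple)
    return new_w

rng = np.random.default_rng(1)
T, M, n = 3, 4, 50
w_true = rng.standard_normal(M) + 0.1 * rng.standard_normal((T, M))  # related tasks
tasks = []
for t in range(T):
    X = rng.standard_normal((n, M))
    tasks.append((X, X @ w_true[t] + 0.05 * rng.standard_normal(n)))

w = np.zeros((T, M))
for _ in range(3000):
    w = cross_learning_step(w, tasks)
print("relative error:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))
```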
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Multi-Task Learning with Deep Neural Networks: A Survey [0.0]
Multi-task learning (MTL) is a subfield of machine learning in which multiple tasks are simultaneously learned by a shared model.
We give an overview of multi-task learning methods for deep neural networks, with the aim of summarizing both the well-established and most recent directions within the field.
arXiv Detail & Related papers (2020-09-10T19:31:04Z)
- Small Towers Make Big Differences [59.243296878666285]
Multi-task learning aims at solving multiple machine learning tasks at the same time.
A good solution to a multi-task learning problem should be generalizable in addition to being Pareto optimal.
We propose a method of under-parameterized self-auxiliaries for multi-task models to achieve the best of both worlds.
arXiv Detail & Related papers (2020-08-13T10:45:31Z)
- Navigating the Trade-Off between Multi-Task Learning and Learning to Multitask in Deep Neural Networks [9.278739724750343]
Multi-task learning refers to a paradigm in machine learning in which a network is trained on various related tasks to facilitate the acquisition of those tasks.
Multitasking is used, especially in the cognitive science literature, to indicate the ability to execute multiple tasks simultaneously.
We show that the tension between the two arises in deep networks as well and discuss a meta-learning algorithm that lets an agent manage this trade-off in an unfamiliar environment.
arXiv Detail & Related papers (2020-07-20T23:26:16Z)
- Auxiliary Learning by Implicit Differentiation [54.92146615836611]
Training neural networks with auxiliary tasks is a common practice for improving the performance on a main task of interest.
Here, we propose a novel framework, AuxiLearn, that targets both challenges using implicit differentiation.
First, when useful auxiliaries are known, we propose learning a network that combines all losses into a single coherent objective function.
Second, when no useful auxiliary task is known, we describe how to learn a network that generates a meaningful, novel auxiliary task.
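To make the first idea concrete, here is a hypothetical sketch (our names and sizes, not the paper's code) of a small network that maps the main and auxiliary losses to a single scalar training objective; how its parameters are tuned on validation data via implicit differentiation is the paper's contribution and is not reproduced here.

```python
import torch
import torch.nn as nn

class LossCombiner(nn.Module):
    """Maps a vector of per-task losses to one scalar training objective."""

    def __init__(self, n_losses):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_losses, 16), nn.Softplus(),
            nn.Linear(16, 1), nn.Softplus())     # keep the combined loss non-negative

    def forward(self, losses):
        # losses: 1-D tensor holding the main loss followed by auxiliary losses
        return self.net(losses).squeeze()

combiner = LossCombiner(n_losses=3)
main, aux1, aux2 = torch.tensor(0.7), torch.tensor(1.2), torch.tensor(0.4)
objective = combiner(torch.stack([main, aux1, aux2]))
objective.backward()
# In this toy call only the combiner has parameters; during training the losses
# would come from the main model's forward pass, so the model would also receive
# gradients through the combined objective.
```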
arXiv Detail & Related papers (2020-06-22T19:35:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.