Multi-Task Neural Processes
- URL: http://arxiv.org/abs/2111.05820v1
- Date: Wed, 10 Nov 2021 17:27:46 GMT
- Title: Multi-Task Neural Processes
- Authors: Jiayi Shen, Xiantong Zhen, Marcel Worring, Ling Shao
- Abstract summary: We develop multi-task neural processes, a new variant of neural processes for multi-task learning.
In particular, we propose to explore transferable knowledge from related tasks in the function space to provide inductive bias for improving each individual task.
Results demonstrate the effectiveness of multi-task neural processes in transferring useful knowledge among tasks for multi-task learning.
- Score: 105.22406384964144
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural processes have recently emerged as a class of powerful neural latent
variable models that combine the strengths of neural networks and stochastic
processes. As they can encode contextual data in the network's function space,
they offer a new way to model task relatedness in multi-task learning. To study
this potential, we develop multi-task neural processes, a new variant of neural
processes for multi-task learning. In particular, we propose to explore
transferable knowledge from related tasks in the function space to provide
inductive bias for improving each individual task. To do so, we derive the
function priors in a hierarchical Bayesian inference framework, which enables
each task to incorporate the shared knowledge provided by related tasks into
its context of the prediction function. Our multi-task neural processes
methodologically expand the scope of vanilla neural processes and provide a new
way of exploring task relatedness in function spaces for multi-task learning.
The proposed multi-task neural processes are capable of learning multiple tasks
with limited labeled data and in the presence of domain shift. We perform
extensive experimental evaluations on several benchmarks for the multi-task
regression and classification tasks. The results demonstrate the effectiveness
of multi-task neural processes in transferring useful knowledge among tasks for
multi-task learning and superior performance in multi-task classification and
brain image segmentation.
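To make the setup concrete, below is a minimal sketch, in PyTorch, of the kind of hierarchical latent-variable model the abstract describes: each task's context set is encoded in function space, a latent shared across all tasks carries the transferable knowledge, and each task's prediction function is conditioned on its own context together with that shared latent. All module names, dimensions, and the Gaussian parameterization are illustrative assumptions, not the authors' reference implementation.

# Hypothetical sketch of a multi-task neural process, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(d_in, d_out, hidden=128):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(), nn.Linear(hidden, d_out))

class MultiTaskNP(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=128, z_dim=64):
        super().__init__()
        self.point_enc = mlp(x_dim + y_dim, r_dim)       # encode each (x, y) context pair
        self.shared_head = mlp(r_dim, 2 * z_dim)         # q(theta | contexts of all tasks)
        self.task_head = mlp(r_dim + z_dim, 2 * z_dim)   # q(z_t | context of task t, theta)
        self.decoder = mlp(x_dim + z_dim, 2 * y_dim)     # p(y | x, z_t)

    @staticmethod
    def _sample(stats):
        mu, raw_sigma = stats.chunk(2, dim=-1)
        sigma = 0.1 + 0.9 * torch.sigmoid(raw_sigma)     # bounded scale, common in NP code
        return mu + sigma * torch.randn_like(sigma)      # reparameterized sample

    def forward(self, contexts, target_xs):
        # contexts: list over tasks of (x_c, y_c); target_xs: list over tasks of x_t
        r_tasks = [self.point_enc(torch.cat([x, y], -1)).mean(0) for x, y in contexts]
        theta = self._sample(self.shared_head(torch.stack(r_tasks).mean(0)))  # shared knowledge
        preds = []
        for r_t, x_t in zip(r_tasks, target_xs):
            z_t = self._sample(self.task_head(torch.cat([r_t, theta], -1)))   # task-specific latent
            z_rep = z_t.expand(x_t.size(0), -1)
            mu_y, raw = self.decoder(torch.cat([x_t, z_rep], -1)).chunk(2, -1)
            preds.append((mu_y, 0.1 + 0.9 * F.softplus(raw)))                 # predictive mean / scale
        return preds

# toy usage: two related 1-D regression tasks with small context sets
model = MultiTaskNP()
contexts = [(torch.rand(10, 1), torch.rand(10, 1)), (torch.rand(7, 1), torch.rand(7, 1))]
predictions = model(contexts, [torch.rand(20, 1), torch.rand(20, 1)])

Training such a sketch would maximize a variational objective over the target points of all tasks jointly, so that the shared latent is shaped by, and transfers knowledge between, every task's context.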
Related papers
- Towards Understanding Multi-Task Learning (Generalization) of LLMs via Detecting and Exploring Task-Specific Neurons [45.04661608619081]
We detect task-sensitive neurons in large language models (LLMs) via gradient attribution on task-specific data.
We find that the overlap of task-specific neurons is strongly associated with generalization and specialization across tasks.
We propose a neuron-level continual fine-tuning method that fine-tunes only the current task-specific neurons during continual learning.
arXiv Detail & Related papers (2024-07-09T01:27:35Z)
- Multitask Learning with No Regret: from Improved Confidence Bounds to Active Learning [79.07658065326592]
Quantifying uncertainty in the estimated tasks is of pivotal importance for many downstream applications, such as online or active learning.
We provide novel multitask confidence intervals in the challenging setting when neither the similarity between tasks nor the tasks' features are available to the learner.
We further propose a novel online learning algorithm that achieves improved regret without knowing the task-similarity parameter in advance.
arXiv Detail & Related papers (2023-08-03T13:08:09Z)
- Combining Modular Skills in Multitask Learning [149.8001096811708]
A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks.
In this work, we assume each task is associated with a subset of latent discrete skills from a (potentially small) inventory.
We find that the modular design of a network significantly increases sample efficiency in reinforcement learning and few-shot generalisation in supervised learning.
arXiv Detail & Related papers (2022-02-28T16:07:19Z)
- On the relationship between disentanglement and multi-task learning [62.997667081978825]
We take a closer look at the relationship between disentanglement and multi-task learning based on hard parameter sharing.
We show that disentanglement appears naturally during the process of multi-task neural network training.
arXiv Detail & Related papers (2021-10-07T14:35:34Z)
- Multi-Task Learning with Sequence-Conditioned Transporter Networks [67.57293592529517]
We aim to solve multi-task learning through the lens of sequence-conditioning and weighted sampling.
First, we propose a new benchmark suite aimed at compositional tasks, MultiRavens, which allows defining custom task combinations.
Second, we propose a vision-based end-to-end system architecture, Sequence-Conditioned Transporter Networks, which augments Goal-Conditioned Transporter Networks with sequence-conditioning and weighted sampling.
arXiv Detail & Related papers (2021-09-15T21:19:11Z)
- Efficient and robust multi-task learning in the brain with modular task primitives [2.6166087473624318]
We show that a modular network endowed with task primitives allows for learning multiple tasks well while keeping parameter counts, and updates, low.
We also show that the skills acquired with our approach are more robust to a broad range of perturbations compared to those acquired with other multi-task learning strategies.
arXiv Detail & Related papers (2021-05-28T21:07:54Z)
- Learning Rates for Multi-task Regularization Networks [7.799917891986168]
Multi-task learning is an important trend in machine learning in the era of artificial intelligence and big data.
We present a mathematical analysis of learning rate estimates for multi-task learning, based on the theory of vector-valued reproducing kernel Hilbert spaces and matrix-valued reproducing kernels.
It reveals that the generalization ability of multi-task learning algorithms is indeed affected as the number of tasks increases.
arXiv Detail & Related papers (2021-04-01T13:10:29Z)
- Multi-Task Learning with Deep Neural Networks: A Survey [0.0]
Multi-task learning (MTL) is a subfield of machine learning in which multiple tasks are simultaneously learned by a shared model.
We give an overview of multi-task learning methods for deep neural networks, with the aim of summarizing both the well-established and most recent directions within the field.
arXiv Detail & Related papers (2020-09-10T19:31:04Z)
- Reparameterizing Convolutions for Incremental Multi-Task Learning without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting the previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which have been shown to significantly degrade single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z)
- Navigating the Trade-Off between Multi-Task Learning and Learning to Multitask in Deep Neural Networks [9.278739724750343]
Multi-task learning refers to a paradigm in machine learning in which a network is trained on various related tasks to facilitate the acquisition of each of them.
Multitasking, in contrast, is used (especially in the cognitive science literature) to indicate the ability to execute multiple tasks simultaneously.
We show that the same tension arises in deep networks and discuss a meta-learning algorithm for an agent to manage this trade-off in an unfamiliar environment.
arXiv Detail & Related papers (2020-07-20T23:26:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.