Efficient and robust multi-task learning in the brain with modular task primitives
- URL: http://arxiv.org/abs/2105.14108v1
- Date: Fri, 28 May 2021 21:07:54 GMT
- Title: Efficient and robust multi-task learning in the brain with modular task primitives
- Authors: Christian David Marton, Guillaume Lajoie, Kanaka Rajan
- Abstract summary: We show that a modular network endowed with task primitives allows for learning multiple tasks well while keeping parameter counts and updates low.
We also show that the skills acquired with our approach are more robust to a broad range of perturbations compared to those acquired with other multi-task learning strategies.
- Score: 2.6166087473624318
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a real-world setting, biological agents do not have infinite resources to
learn new things. It is thus useful to recycle previously acquired knowledge in
a way that allows for faster, less resource-intensive acquisition of multiple
new skills. Neural networks in the brain are likely not entirely re-trained
with new tasks, but how they leverage existing computations to learn new tasks
is not well understood. In this work, we study this question in artificial
neural networks trained on commonly used neuroscience paradigms. Building on
recent work from the multi-task learning literature, we propose two
ingredients: (1) network modularity, and (2) learning task primitives.
Together, these ingredients form inductive biases we call structural and
functional, respectively. Using a corpus of nine different tasks, we show that
a modular network endowed with task primitives allows for learning multiple
tasks well while keeping parameter counts and updates low. We also show that
the skills acquired with our approach are more robust to a broad range of
perturbations compared to those acquired with other multi-task learning
strategies. This work offers a new perspective on achieving efficient
multi-task learning in the brain, and makes predictions for novel neuroscience
experiments in which targeted perturbations are employed to explore solution
spaces.
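The two ingredients named in the abstract can be pictured concretely. Below is a minimal, hypothetical sketch (written in PyTorch; it is not the authors' released code) of the idea: several modules are assumed to have been pretrained on individual task primitives and are then frozen, so learning a new task only updates a small readout that mixes their outputs, which keeps trainable parameter counts and weight updates low. Module sizes, the mixing scheme, and all names here are illustrative assumptions.

```python
# Minimal sketch of modular task primitives (assumptions, not the paper's code):
# (1) structural bias  - the network is split into separate modules;
# (2) functional bias  - modules are pretrained on task primitives and frozen,
#     so a new task only trains a small readout over their outputs.
import torch
import torch.nn as nn


class PrimitiveModule(nn.Module):
    """One module assumed to have been pretrained on a single task primitive."""

    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.Tanh())

    def forward(self, x):
        return self.net(x)


class ModularMultiTaskNet(nn.Module):
    """Frozen primitive modules plus a small trainable readout for a new task."""

    def __init__(self, input_dim: int, hidden_dim: int, n_modules: int, n_outputs: int):
        super().__init__()
        self.primitives = nn.ModuleList(
            [PrimitiveModule(input_dim, hidden_dim) for _ in range(n_modules)]
        )
        # Freeze the primitives: only the readout is updated for a new task.
        for p in self.primitives.parameters():
            p.requires_grad = False
        self.readout = nn.Linear(n_modules * hidden_dim, n_outputs)

    def forward(self, x):
        feats = torch.cat([m(x) for m in self.primitives], dim=-1)
        return self.readout(feats)


# Example: learn a new task by updating only the readout weights.
model = ModularMultiTaskNet(input_dim=10, hidden_dim=32, n_modules=4, n_outputs=2)
trainable = [p for p in model.parameters() if p.requires_grad]
optim = torch.optim.Adam(trainable, lr=1e-3)

x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
loss = nn.functional.cross_entropy(model(x), y)
optim.zero_grad()
loss.backward()
optim.step()
```

In this sketch the module split realizes the structural inductive bias and the frozen, primitive-pretrained weights realize the functional one; the primitive pretraining procedure and the robustness analyses are properties of the paper itself and are not modeled here.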
Related papers
- Multitask Learning with No Regret: from Improved Confidence Bounds to Active Learning [79.07658065326592]
Quantifying uncertainty in the estimated tasks is of pivotal importance for many downstream applications, such as online or active learning.
We provide novel multitask confidence intervals in the challenging setting when neither the similarity between tasks nor the tasks' features are available to the learner.
We propose a novel online learning algorithm that achieves such improved regret without knowing this parameter in advance.
arXiv Detail & Related papers (2023-08-03T13:08:09Z) - Continual task learning in natural and artificial agents [4.726777092009554]
A wave of brain recording studies has investigated how neural representations change during task learning.
We review recent work that has explored the geometry and dimensionality of neural task representations in neocortex.
We discuss how ideas from machine learning are helping neuroscientists understand how natural tasks are learned and coded in biological brains.
arXiv Detail & Related papers (2022-10-10T09:36:08Z) - Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z) - Combining Modular Skills in Multitask Learning [149.8001096811708]
A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks.
In this work, we assume each task is associated with a subset of latent discrete skills from a (potentially small) inventory.
We find that the modular design of a network significantly increases sample efficiency in reinforcement learning and few-shot generalisation in supervised learning.
arXiv Detail & Related papers (2022-02-28T16:07:19Z) - Towards continual task learning in artificial neural networks: current approaches and insights from neuroscience [0.0]
The innate capacity of humans and other animals to learn a diverse, and often interfering, range of knowledge is a hallmark of natural intelligence.
Endowing artificial neural networks with the ability to learn across a range of tasks and domains is a clear goal of artificial intelligence.
arXiv Detail & Related papers (2021-12-28T13:50:51Z) - Multi-Task Neural Processes [105.22406384964144]
We develop multi-task neural processes, a new variant of neural processes for multi-task learning.
In particular, we propose to explore transferable knowledge from related tasks in the function space to provide inductive bias for improving each individual task.
Results demonstrate the effectiveness of multi-task neural processes in transferring useful knowledge among tasks for multi-task learning.
arXiv Detail & Related papers (2021-11-10T17:27:46Z) - MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale [103.7609761511652]
We show how a large-scale collective robotic learning system can acquire a repertoire of behaviors simultaneously.
New tasks can be continuously instantiated from previously learned tasks.
We train and evaluate our system on a set of 12 real-world tasks with data collected from 7 robots.
arXiv Detail & Related papers (2021-04-16T16:38:02Z) - One Network Fits All? Modular versus Monolithic Task Formulations in Neural Networks [36.07011014271394]
We show that a single neural network is capable of simultaneously learning multiple tasks from a combined data set.
We study how the complexity of learning such combined tasks grows with the complexity of the task codes.
arXiv Detail & Related papers (2021-03-29T01:16:42Z) - Reparameterizing Convolutions for Incremental Multi-Task Learning without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, the model should be inherently incremental, continuously incorporating information from new tasks without forgetting previously learned ones (incremental learning).
Second, adverse interactions amongst tasks, which have been shown to significantly degrade single-task performance in a multi-task setup, must be eliminated (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z) - Efficient Architecture Search for Continual Learning [36.998565674813285]
Continual learning with neural networks aims to learn a sequence of tasks well.
It is often confronted with three challenges: (1) overcoming the catastrophic forgetting problem, (2) adapting the current network to new tasks, and (3) controlling model complexity.
We propose a novel approach named Continual Learning with Efficient Architecture Search, or CLEAS for short.
arXiv Detail & Related papers (2020-06-07T02:59:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.