Towards continual task learning in artificial neural networks: current
approaches and insights from neuroscience
- URL: http://arxiv.org/abs/2112.14146v1
- Date: Tue, 28 Dec 2021 13:50:51 GMT
- Title: Towards continual task learning in artificial neural networks: current
approaches and insights from neuroscience
- Authors: David McCaffary
- Abstract summary: The innate capacity of humans and other animals to learn a diverse, and often interfering, range of knowledge is a hallmark of natural intelligence.
The ability of artificial neural networks to learn across a range of tasks and domains is a clear goal of artificial intelligence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The innate capacity of humans and other animals to learn a diverse, and often
interfering, range of knowledge and skills throughout their lifespan is a
hallmark of natural intelligence, with obvious evolutionary motivations. In
parallel, the ability of artificial neural networks (ANNs) to learn across a
range of tasks and domains, combining and re-using learned representations
where required, is a clear goal of artificial intelligence. This capacity,
widely described as continual learning, has become a prolific subfield of
research in machine learning. Despite the numerous successes of deep learning
in recent years, across domains ranging from image recognition to machine
translation, such continual task learning has proved challenging. Neural
networks trained on multiple tasks in sequence with stochastic gradient descent
often suffer from representational interference, whereby the learned weights
for a given task effectively overwrite those of previous tasks in a process
termed catastrophic forgetting. This represents a major impediment to the
development of more generalised artificial learning systems, capable of
accumulating knowledge over time and task space, in a manner analogous to
humans. A repository of selected papers and implementations accompanying this
review can be found at https://github.com/mccaffary/continual-learning.
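To make catastrophic forgetting concrete, the following minimal sketch (not from the paper; the tasks, dimensions, and learning rate are illustrative assumptions) trains a single linear model with plain stochastic gradient descent on one regression task and then on a second, and shows that the loss on the first task rises sharply once the second has been learned.

```python
# Minimal illustration of catastrophic forgetting (illustrative, not from the paper):
# a linear model trained with SGD on task A, then task B, forgets task A.
import numpy as np

rng = np.random.default_rng(0)
d = 20                                    # input dimensionality (arbitrary)
X = rng.normal(size=(200, d))             # shared inputs for both tasks

w_A = rng.normal(size=d)                  # ground-truth weights, task A
w_B = rng.normal(size=d)                  # ground-truth weights, task B
y_A, y_B = X @ w_A, X @ w_B               # task-specific regression targets

def mse(w, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, y, lr=0.01, steps=500):
    """Plain per-sample SGD on squared error for one task."""
    for _ in range(steps):
        i = rng.integers(len(X))
        grad = 2 * (X[i] @ w - y[i]) * X[i]
        w = w - lr * grad
    return w

w = np.zeros(d)
w = train(w, y_A)
loss_A_after_A = mse(w, y_A)              # low: task A has been learned

w = train(w, y_B)                         # continue training on task B only
loss_A_after_B = mse(w, y_A)              # high: task A weights overwritten

print(f"Task A loss after training on A: {loss_A_after_A:.3f}")
print(f"Task A loss after training on B: {loss_A_after_B:.3f}  <- forgetting")
```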
Related papers
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of
Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities (a minimal sketch of this subspace-extraction idea appears after this list).
Our method achieves continual learning for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be applied to cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - Brain-inspired learning in artificial neural networks: a review [5.064447369892274]
We review current brain-inspired learning representations in artificial neural networks.
We investigate the integration of more biologically plausible mechanisms, such as synaptic plasticity, to enhance these networks' capabilities.
arXiv Detail & Related papers (2023-05-18T18:34:29Z) - Synergistic information supports modality integration and flexible
learning in neural networks solving multiple tasks [107.8565143456161]
We investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks.
Results show that synergy increases as neural networks learn multiple diverse tasks.
Randomly turning off neurons during training through dropout increases network redundancy, corresponding to an increase in robustness.
arXiv Detail & Related papers (2022-10-06T15:36:27Z) - Continual learning of quantum state classification with gradient
episodic memory [0.20646127669654826]
A phenomenon called catastrophic forgetting emerges when a machine learning model is trained sequentially across multiple tasks.
Some continual learning strategies have been proposed to address the catastrophic forgetting problem.
In this work, we incorporate the gradient episodic memory method to train a variational quantum classifier (a simplified gradient-projection sketch appears after this list).
arXiv Detail & Related papers (2022-03-26T09:28:26Z) - Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect different isolated areas in deep learning with their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
arXiv Detail & Related papers (2022-01-15T15:03:17Z) - Learning to acquire novel cognitive tasks with evolution, plasticity and
meta-meta-learning [3.8073142980733]
In meta-learning, networks are trained with external algorithms to learn tasks that require acquiring, storing and exploiting unpredictable information for each new instance of the task.
Here we evolve neural networks, endowed with plastic connections, over a sizable set of simple meta-learning tasks based on a neuroscience modelling framework.
The resulting evolved network can automatically acquire a novel simple cognitive task, never seen during training, through the spontaneous operation of its evolved neural organization and plasticity structure.
arXiv Detail & Related papers (2021-12-16T03:18:01Z) - Multi-Task Neural Processes [105.22406384964144]
We develop multi-task neural processes, a new variant of neural processes for multi-task learning.
In particular, we propose to explore transferable knowledge from related tasks in the function space to provide inductive bias for improving each individual task.
Results demonstrate the effectiveness of multi-task neural processes in transferring useful knowledge among tasks for multi-task learning.
arXiv Detail & Related papers (2021-11-10T17:27:46Z) - Efficient and robust multi-task learning in the brain with modular task
primitives [2.6166087473624318]
We show that a modular network endowed with task primitives allows for learning multiple tasks well while keeping parameter counts, and updates, low.
We also show that the skills acquired with our approach are more robust to a broad range of perturbations compared to those acquired with other multi-task learning strategies.
arXiv Detail & Related papers (2021-05-28T21:07:54Z) - Beneficial Perturbation Network for designing general adaptive
artificial intelligence systems [14.226973149346886]
We propose a new type of deep neural network with extra, out-of-network, task-dependent biasing units to accommodate dynamic situations.
Our approach is memory-efficient and parameter-efficient, can accommodate many tasks, and achieves state-of-the-art performance across different tasks and domains.
arXiv Detail & Related papers (2020-09-27T01:28:10Z)
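The Hebbian orthogonal-projection entry above notes that Hebbian and anti-Hebbian learning on lateral connections can extract the principal subspace of neural activities. The sketch below is not that paper's method; it uses the standard generalized Hebbian algorithm (Sanger's rule), a classic Hebbian/anti-Hebbian update, to recover the dominant subspace of a synthetic input stream, purely to illustrate the subspace-extraction idea. The dimensions and learning rate are arbitrary assumptions.

```python
# Hebbian/anti-Hebbian principal-subspace extraction via Sanger's rule (GHA).
# Illustrative only; not the specific lateral-connection method of the paper.
import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 3                               # input dim, subspace dim (assumed)

# Synthetic inputs with a dominant 3-dimensional subspace plus small noise.
basis = np.linalg.qr(rng.normal(size=(d, k)))[0]
X = rng.normal(size=(5000, k)) @ basis.T * 3.0 + 0.1 * rng.normal(size=(5000, d))

W = 0.01 * rng.normal(size=(k, d))         # feedforward weights (k output units)
lr = 1e-3
for x in X:
    y = W @ x                              # feedforward response
    # Sanger's rule: Hebbian term y*x minus an anti-Hebbian term from
    # lower-triangular output interactions, which decorrelates the outputs.
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# The learned rows of W should span the dominant subspace of the data.
P_true = basis @ basis.T                   # projector onto the true subspace
W_rows = W / np.linalg.norm(W, axis=1, keepdims=True)
alignment = np.linalg.norm(P_true @ W_rows.T, axis=0)   # ~1.0 if aligned
print("alignment of each learned direction with the true subspace:", alignment)
```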
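The gradient episodic memory entry above relies on constraining current-task gradients so that the loss on stored examples from earlier tasks does not increase. GEM itself solves a quadratic programme with one constraint per previous task; the sketch below shows only the simpler single-constraint projection (in the spirit of averaged GEM), with hypothetical gradient vectors standing in for a real model.

```python
# Simplified gradient projection in the spirit of episodic-memory methods.
# GEM solves a QP with one constraint per previous task; this sketch uses the
# single averaged constraint of A-GEM, which is easier to show in a few lines.
import numpy as np

def project_gradient(g_new, g_mem):
    """Project the current-task gradient so it does not increase the loss
    on the episodic memory (i.e. enforce g_proj . g_mem >= 0)."""
    dot = g_new @ g_mem
    if dot >= 0.0:
        return g_new                       # no interference; keep gradient
    # Remove the component of g_new that conflicts with the memory gradient.
    return g_new - (dot / (g_mem @ g_mem)) * g_mem

# Toy usage with hypothetical gradients (no real model or data here).
g_new = np.array([1.0, -2.0, 0.5])         # gradient on the current task
g_mem = np.array([0.5, 1.0, 0.0])          # gradient on stored past examples
g_proj = project_gradient(g_new, g_mem)

print("dot before projection:", g_new @ g_mem)    # negative: would interfere
print("dot after projection: ", g_proj @ g_mem)   # ~0: interference removed
```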