Continual task learning in natural and artificial agents
- URL: http://arxiv.org/abs/2210.04520v1
- Date: Mon, 10 Oct 2022 09:36:08 GMT
- Title: Continual task learning in natural and artificial agents
- Authors: Timo Flesch, Andrew Saxe, Christopher Summerfield
- Abstract summary: A wave of brain recording studies has investigated how neural representations change during task learning.
We review recent work that has explored the geometry and dimensionality of neural task representations in neocortex.
We discuss how ideas from machine learning are helping neuroscientists understand how natural tasks are learned and coded in biological brains.
- Score: 4.726777092009554
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How do humans and other animals learn new tasks? A wave of brain recording
studies has investigated how neural representations change during task
learning, with a focus on how tasks can be acquired and coded in ways that
minimise mutual interference. We review recent work that has explored the
geometry and dimensionality of neural task representations in neocortex, and
computational models that have exploited these findings to understand how the
brain may partition knowledge between tasks. We discuss how ideas from machine
learning, including those that combine supervised and unsupervised learning,
are helping neuroscientists understand how natural tasks are learned and coded
in biological brains.
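As a toy illustration of the interference problem the abstract refers to (not taken from the paper; all data and names here are illustrative): a single linear network trained sequentially on two conflicting tasks forgets the first one.

```python
# Minimal sketch of catastrophic interference: one shared weight vector
# trained on task A, then task B, loses its solution to task A.
# Everything here is illustrative, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))            # shared inputs for both tasks
w_a, w_b = rng.normal(size=10), rng.normal(size=10)
y_a, y_b = X @ w_a, X @ w_b               # two conflicting input-output mappings

def train(w, y, steps=500, lr=0.05):
    """Plain gradient descent on one task, starting from the given weights."""
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(X)
    return w

mse = lambda w, y: float(np.mean((X @ w - y) ** 2))
w = train(np.zeros(10), y_a)
print("task A error after learning A:", round(mse(w, y_a), 4))
w = train(w, y_b)                         # sequential training on task B...
print("task A error after learning B:", round(mse(w, y_a), 4))  # ...erases A
```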
Related papers
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representations of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we go further, directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the artificial-neuron sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
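A minimal sketch of the coupling idea, under assumed data shapes (not the paper's actual method): each artificial neuron is assigned to the functional brain network whose response profile it correlates with most strongly.

```python
# Illustrative coupling sketch: `acts` holds artificial-neuron activations
# and `fbn` holds functional-brain-network responses to the same stimuli.
# Shapes and the correlation rule are assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(1)
acts = rng.normal(size=(200, 64))   # (stimuli, artificial neurons)
fbn = rng.normal(size=(200, 7))     # (stimuli, brain networks)

def zscore(a):
    return (a - a.mean(0)) / a.std(0)

# Correlation of every neuron with every network, then a hard assignment:
corr = zscore(acts).T @ zscore(fbn) / len(acts)   # (neurons, networks)
assignment = corr.argmax(axis=1)                   # neuron -> best-matching FBN
print(np.bincount(assignment, minlength=7))        # sub-group sizes per network
```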
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration with computational complexity.
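A minimal sketch of the simplest form of neuronal heterogeneity the summary mentions (illustrative, not the paper's model): leaky integrate-and-fire neurons with per-neuron membrane time constants.

```python
# Illustrative LIF population with heterogeneous time constants; parameter
# values and dynamics are assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(2)
n, steps, dt = 50, 1000, 1e-3
tau = rng.uniform(5e-3, 50e-3, size=n)   # heterogeneous membrane time constants
v, thresh = np.zeros(n), 1.0
spikes = np.zeros((steps, n))

for t in range(steps):
    drive = rng.normal(1.2, 0.5, size=n)          # noisy input current
    v += dt / tau * (drive - v)                    # leaky integration
    fired = v >= thresh
    spikes[t] = fired
    v[fired] = 0.0                                 # reset after a spike

print("mean rate (Hz), first 5 neurons:", (spikes.mean(0) / dt).round(1)[:5])
```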
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Exploring a Cognitive Architecture for Learning Arithmetic Equations [0.0]
This paper explores the cognitive mechanisms powering arithmetic learning.
I implement a number vectorization embedding network and an associative memory model to investigate how an intelligent system can learn and recall arithmetic equations.
I aim to contribute to ongoing research into the neural correlates of mathematical cognition in intelligent systems.
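A minimal sketch of the associative-memory idea (illustrative; the embedding and memory here are hypothetical, not the paper's implementation): equations are vectorized with a conjunctive code and stored against their results via Hebbian outer products.

```python
# Illustrative linear associative memory: vectorized equations like "3 + 4"
# are stored against their results and recalled from the equation vector.
import numpy as np

def embed(a, b, dim=10):
    """Hypothetical vectorization: a conjunctive (outer-product) code, so
    each operand pair gets its own orthogonal key vector."""
    v = np.zeros((dim, dim))
    v[a, b] = 1.0
    return v.ravel()

dim_out = 19                     # possible sums 0..18
M = np.zeros((dim_out, 100))     # Hebbian weight matrix (result x equation)
for a in range(10):
    for b in range(10):
        target = np.zeros(dim_out)
        target[a + b] = 1.0
        M += np.outer(target, embed(a, b))

recall = M @ embed(3, 4)
print("3 + 4 ->", recall.argmax())   # recalls 7
```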
arXiv Detail & Related papers (2024-05-05T18:42:00Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning in spiking neural networks with nearly zero forgetting.
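A minimal sketch of the core mechanism as I read it (a reconstruction, not the authors' code): an Oja-style Hebbian rule estimates the principal subspace of a past task's activity, and later weight updates are projected orthogonal to it so the old task is left undisturbed.

```python
# Illustrative orthogonal-projection sketch; the Hebbian subspace estimator
# here is Oja's subspace rule, standing in for the paper's lateral circuit.
import numpy as np

rng = np.random.default_rng(3)
acts = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 20))  # correlated activity

# Oja's subspace rule: W drifts toward a basis of the top-k activity subspace.
k, lr = 5, 1e-3
W = rng.normal(scale=0.1, size=(20, k))
for x in acts:
    y = W.T @ x
    W += lr * (np.outer(x, y) - W @ np.outer(y, y))   # Hebbian term - decay term

# Project a candidate update for a new task away from the protected subspace:
P = np.eye(20) - W @ np.linalg.pinv(W)                 # orthogonal projector
dw_new = rng.normal(size=20)
dw_safe = P @ dw_new
print("overlap with protected subspace:", np.linalg.norm(W.T @ dw_safe))  # ~0
```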
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
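As a concrete taste of one family such surveys cover, here is a minimal sketch of feedback alignment (a representative stand-in, not code from the survey): the backward pass uses a fixed random matrix instead of the transposed forward weights, sidestepping the biologically implausible weight-transport requirement of backpropagation.

```python
# Illustrative feedback alignment on a toy regression problem; data,
# sizes, and learning rate are all assumptions for the sketch.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(256, 8))
Y = np.tanh(X @ rng.normal(size=(8, 2)))
W1 = rng.normal(scale=0.5, size=(8, 16))
W2 = rng.normal(scale=0.5, size=(16, 2))
B = rng.normal(scale=0.5, size=(16, 2))   # fixed random feedback pathway

for _ in range(2000):
    H = np.tanh(X @ W1)
    err = H @ W2 - Y                       # output error
    W2 -= 0.01 * H.T @ err / len(X)
    delta = (err @ B.T) * (1 - H ** 2)     # credit assigned via B, not W2.T
    W1 -= 0.01 * X.T @ delta / len(X)

print("final mse:", float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)))
```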
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
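A minimal sketch of the incremental idea (hypothetical, not the paper's architecture): an agent accumulates cognitive functions substage by substage, and a task becomes solvable once all the skills it requires are present.

```python
# Illustrative skill-registry agent; skill and task names are invented
# for the sketch, not taken from the paper.
class Agent:
    def __init__(self):
        self.skills = {}

    def add_skill(self, name, fn):
        self.skills[name] = fn            # a new substage adds a function

    def can_solve(self, task):
        return all(s in self.skills for s in task["requires"])

reach = {"name": "reach", "requires": ["perceive", "move_arm"]}
agent = Agent()
agent.add_skill("perceive", lambda obs: obs)
print(agent.can_solve(reach))             # False: motor skill still missing
agent.add_skill("move_arm", lambda target: f"moving to {target}")
print(agent.can_solve(reach))             # True after the next substage
```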
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Multi-Task Neural Processes [105.22406384964144]
We develop multi-task neural processes, a new variant of neural processes for multi-task learning.
In particular, we propose to explore transferable knowledge from related tasks in the function space to provide inductive bias for improving each individual task.
Results demonstrate the effectiveness of multi-task neural processes in transferring useful knowledge among tasks for multi-task learning.
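A minimal structural sketch of the neural-process idea (untrained, random weights; shapes are assumptions, not the paper's model): context points are encoded and mean-aggregated into a permutation-invariant task representation, and related tasks contribute a shared representation that conditions each prediction.

```python
# Illustrative neural-process-style forward pass; weights are random and
# untrained, so this shows the structure only, not the paper's results.
import numpy as np

rng = np.random.default_rng(5)
enc = rng.normal(scale=0.3, size=(2, 16))   # encodes (x, y) context pairs
dec = rng.normal(scale=0.3, size=33)        # decodes [x, r_task, r_shared]

def represent(context):                      # context: (n, 2) array of (x, y)
    return np.tanh(context @ enc).mean(axis=0)  # permutation-invariant aggregate

tasks = [rng.normal(size=(10, 2)) for _ in range(3)]   # related tasks' contexts
r_shared = np.mean([represent(c) for c in tasks], axis=0)  # cross-task knowledge

def predict(x_target, context):
    feats = np.concatenate(([x_target], represent(context), r_shared))
    return float(feats @ dec)

print(predict(0.5, tasks[0]))
```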
arXiv Detail & Related papers (2021-11-10T17:27:46Z)
- Efficient and robust multi-task learning in the brain with modular task primitives [2.6166087473624318]
We show that a modular network endowed with task primitives allows for learning multiple tasks well while keeping parameter counts and updates low.
We also show that the skills acquired with our approach are more robust to a broad range of perturbations compared to those acquired with other multi-task learning strategies.
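A minimal sketch of the modular idea (a simplification, not the authors' network): frozen random "primitive" modules transform the input, and each task trains only a small linear readout over their outputs, keeping per-task updates low.

```python
# Illustrative modular setup: fixed random modules, per-task least-squares
# readouts. Module sizes and tasks are assumptions for the sketch.
import numpy as np

rng = np.random.default_rng(6)
modules = [rng.normal(size=(5, 32)) for _ in range(4)]   # frozen primitives

def features(X):
    return np.concatenate([np.tanh(X @ m) for m in modules], axis=1)

X = rng.normal(size=(200, 5))
for task, w_true in enumerate(rng.normal(size=(3, 5))):
    y = X @ w_true                                       # per-task target
    F = features(X)
    readout, *_ = np.linalg.lstsq(F, y, rcond=None)      # only this is learned
    print(f"task {task} mse: {np.mean((F @ readout - y) ** 2):.4f}")
```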
arXiv Detail & Related papers (2021-05-28T21:07:54Z)