Cortico-cerebellar networks as decoupling neural interfaces
- URL: http://arxiv.org/abs/2110.11501v1
- Date: Thu, 21 Oct 2021 22:02:38 GMT
- Title: Cortico-cerebellar networks as decoupling neural interfaces
- Authors: Joseph Pemberton and Ellen Boven and Richard Apps and Rui Ponte Costa
- Abstract summary: The brain solves the credit assignment problem remarkably well.
For credit to be assigned across neural networks they must, in principle, wait for specific neural computations to finish.
Deep learning methods suffer from similar locking constraints both on the forward and feedback phase.
Here we propose that a specialised brain region, the cerebellum, helps the cerebral cortex solve these locking problems in a manner akin to DNIs.
- Score: 1.1879716317856945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The brain solves the credit assignment problem remarkably well. For credit to
be assigned across neural networks they must, in principle, wait for specific
neural computations to finish. How the brain deals with this inherent locking
problem has remained unclear. Deep learning methods suffer from similar locking
constraints both on the forward and feedback phase. Recently, decoupled neural
interfaces (DNIs) were introduced as a solution to the forward and feedback
locking problems in deep networks. Here we propose that a specialised brain
region, the cerebellum, helps the cerebral cortex solve these locking problems
in a manner akin to DNIs. To demonstrate the potential of this framework we
introduce a systems-level model in which a recurrent cortical network receives
online temporal feedback predictions from a cerebellar module. We test this
cortico-cerebellar recurrent neural network (ccRNN) model on a number of
sensorimotor (line and digit drawing) and cognitive tasks (pattern recognition
and caption generation) that have been shown to be cerebellar-dependent. In all
tasks, we observe that ccRNNs facilitate learning while reducing ataxia-like
behaviours, consistent with classical experimental observations. Moreover, our
model also explains recent behavioural and neuronal observations while making
several testable predictions across multiple levels. Overall, our work offers a
novel perspective on the cerebellum as a brain-wide decoupling machine for
efficient credit assignment and opens a new avenue between deep learning and
neuroscience.
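
The mechanism at the heart of this proposal is close in spirit to decoupled neural interfaces trained with synthetic gradients: a feedforward "cerebellar" module learns to predict the feedback that a cortical hidden state will eventually receive, so the recurrent network can update without waiting for the true backward pass. Below is a minimal PyTorch sketch of that idea; the module sizes, wiring, and single-step update are illustrative assumptions, not the authors' ccRNN implementation.

```python
import torch
import torch.nn as nn

class CerebellarModule(nn.Module):
    """Feedforward module that predicts the feedback (gradient) expected at a
    cortical hidden state, so the recurrent network need not wait for it."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, hidden_size),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.net(h)

class CorticoCerebellarRNN(nn.Module):
    """Recurrent 'cortical' network paired with a cerebellar feedback predictor."""
    def __init__(self, input_size: int, hidden_size: int, output_size: int):
        super().__init__()
        self.cortex = nn.RNNCell(input_size, hidden_size)
        self.readout = nn.Linear(hidden_size, output_size)
        self.cerebellum = CerebellarModule(hidden_size)

    def forward(self, x_t, h):
        h = self.cortex(x_t, h)
        y_t = self.readout(h)               # y_t would feed a task loss (e.g. drawing targets)
        g_hat = self.cerebellum(h.detach()) # predicted feedback for this hidden state
        return y_t, h, g_hat

# Usage sketch: one "unlocked" cortical update driven by the predicted feedback.
model = CorticoCerebellarRNN(input_size=10, hidden_size=32, output_size=2)
opt = torch.optim.SGD(model.cortex.parameters(), lr=1e-3)

x_t = torch.randn(4, 10)   # one time step of input (batch of 4)
h = torch.zeros(4, 32)     # initial hidden state
y_t, h, g_hat = model(x_t, h)

opt.zero_grad()
h.backward(g_hat.detach()) # inject predicted feedback instead of waiting for the true gradient
opt.step()
# In a full DNI-style scheme the cerebellar module would itself be trained to
# match the true gradient of the task loss once that gradient becomes available.
```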
Related papers
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to this issue is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning in spiking neural networks with nearly zero forgetting (a generic principal-subspace sketch is given after this list).
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Seeking Next Layer Neurons' Attention for Error-Backpropagation-Like Training in a Multi-Agent Network Framework [6.446189857311325]
We propose a local objective for neurons that aligns their updates with error-backpropagation.
We examine a neural network comprising decentralized, self-interested neurons seeking to maximize their local objective.
We demonstrate the learning capacity of these multi-agent neural networks through experiments on three datasets.
arXiv Detail & Related papers (2023-10-15T21:07:09Z)
- A Sparse Quantized Hopfield Network for Online-Continual Memory [0.0]
Nervous systems learn online, where a stream of noisy data points is presented in a non-independent, identically distributed (non-i.i.d.) way.
Deep networks, on the other hand, typically use non-local learning algorithms and are trained in an offline, non-noisy, i.i.d. setting.
We implement this kind of model in a novel neural network called the Sparse Quantized Hopfield Network (SQHN)
arXiv Detail & Related papers (2023-07-27T17:46:17Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- A Neural Network Model of Continual Learning with Cognitive Control [1.8051006704301769]
We show that neural networks equipped with a mechanism for cognitive control do not exhibit catastrophic forgetting when trials are blocked.
We further show an advantage of blocking over interleaving when there is a bias for active maintenance in the control signal.
Our work highlights the potential of cognitive control to aid continual learning in neural networks, and offers an explanation for the advantage of blocking that has been observed in humans.
arXiv Detail & Related papers (2022-02-09T23:53:05Z)
- Learning by Active Forgetting for Neural Networks [36.47528616276579]
Remembering and forgetting mechanisms are two sides of the same coin in a human learning-memory system.
Modern machine learning systems have been working to endow machines with lifelong learning capabilities through better remembering.
This paper presents a learning model by active forgetting mechanism with artificial neural networks.
arXiv Detail & Related papers (2021-11-21T14:55:03Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
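
One of the related papers above (Hebbian Learning based Orthogonal Projection) notes that Hebbian and anti-Hebbian learning on recurrent lateral connections can extract the principal subspace of neural activities. As a much simpler stand-in for that idea, the sketch below uses Oja's classic subspace rule, a Hebbian-style update that recovers the principal subspace of its inputs; it is an illustrative substitute, not the cited paper's orthogonal-projection method for spiking networks.

```python
import numpy as np

def oja_subspace(X, k, lr=1e-3, epochs=5, seed=0):
    """Oja's subspace rule: W <- W + lr * (y x^T - y y^T W), with y = W x.
    Rows of W converge to a basis of the top-k principal subspace of the
    (centred) inputs, up to an arbitrary rotation within that subspace."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(k, X.shape[1]))
    Xc = X - X.mean(axis=0)  # centre the data
    for _ in range(epochs):
        for x in Xc:
            y = W @ x
            W += lr * (np.outer(y, x) - np.outer(y, y) @ W)
    return W

# Toy check: 5-D inputs that mostly live in a 2-D subspace.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 2)) @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(2000, 5))
W = oja_subspace(X, k=2)
# W's row space should roughly match the span of the top-2 right singular
# vectors of the centred data (compare with np.linalg.svd(X - X.mean(0))).
```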