Sign and Relevance Learning
- URL: http://arxiv.org/abs/2110.07292v4
- Date: Tue, 12 Sep 2023 17:01:20 GMT
- Title: Sign and Relevance Learning
- Authors: Sama Daryanavard and Bernd Porr
- Abstract summary: Standard models of biologically realistic reinforcement learning employ a global error signal, which implies the use of shallow networks.
In this study, we introduce a novel network that solves this problem by propagating only the sign of the plasticity change.
Neuromodulation can be understood as a rectified error or relevance signal, while the top-down sign of the error signal determines whether long-term potentiation or long-term depression will occur.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Standard models of biologically realistic or biologically inspired
reinforcement learning employ a global error signal, which implies the use of
shallow networks. On the other hand, error backpropagation allows the use of
networks with multiple layers. However, precise error backpropagation is
difficult to justify in biologically realistic networks because it requires
precise weighted error backpropagation from layer to layer. In this study, we
introduce a novel network that solves this problem by propagating only the sign
of the plasticity change (i.e., LTP/LTD) throughout the whole network, while
neuromodulation controls the learning rate. Neuromodulation can be understood
as a rectified error or relevance signal, while the top-down sign of the error
signal determines whether long-term potentiation or long-term depression will
occur. To demonstrate the effectiveness of this approach, we conducted a real
robotic task as proof of concept. Our results show that this paradigm can
successfully perform complex tasks using a biologically plausible learning
mechanism.
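The learning rule described above can be illustrated with a minimal sketch: only the sign of the error is propagated top-down through the network (selecting LTP vs. LTD at each synapse), while a rectified error acts as a global neuromodulatory "relevance" signal that scales the learning rate. All variable names and the toy two-layer setup below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sign_relevance_step(W1, W2, x, target, eta=0.05):
    """One update of a toy sign-and-relevance rule (illustrative only)."""
    h = np.tanh(W1 @ x)                 # hidden activity
    y = W2 @ h                          # output activity
    err = target - y                    # output error
    relevance = np.abs(err).mean()      # neuromodulation: rectified error
    sign_out = np.sign(err)             # top-down sign: LTP (+1) or LTD (-1)
    # Propagate only the sign through the feedback weights,
    # gated by the local derivative of the hidden nonlinearity.
    sign_hidden = np.sign(W2.T @ sign_out)
    # Weight change = learning rate * relevance * sign * local activity.
    W2 += eta * relevance * np.outer(sign_out, h)
    W1 += eta * relevance * np.outer(sign_hidden * (1 - h**2), x)
    return err

# Toy problem: drive a fixed input toward a fixed target.
x = rng.normal(size=3)
target = np.array([0.5, -0.2])
W1 = rng.normal(scale=0.1, size=(4, 3))
W2 = rng.normal(scale=0.1, size=(2, 4))
errs = [np.abs(sign_relevance_step(W1, W2, x, target)).sum()
        for _ in range(200)]
print(errs[0], "->", errs[-1])
```

Note that because the relevance signal is a scalar and the feedback carries only signs, no precise weighted error transport between layers is required, which is the biological-plausibility argument the abstract makes.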
Related papers
- Disentangling the Causes of Plasticity Loss in Neural Networks [55.23250269007988]
We show that loss of plasticity can be decomposed into multiple independent mechanisms.
We show that a combination of layer normalization and weight decay is highly effective at maintaining plasticity in a variety of synthetic nonstationary learning tasks.
arXiv Detail & Related papers (2024-02-29T00:02:33Z) - Dis-inhibitory neuronal circuits can control the sign of synaptic plasticity [6.227678387562755]
We show that error-modulated learning emerges naturally at the circuit level when recurrent inhibition explicitly influences Hebbian plasticity.
Our findings bridge the gap between functional and experimentally observed plasticity rules.
arXiv Detail & Related papers (2023-10-30T15:06:19Z) - Solving Large-scale Spatial Problems with Convolutional Neural Networks [88.31876586547848]
We employ transfer learning to improve training efficiency for large-scale spatial problems.
We propose that a convolutional neural network (CNN) can be trained on small windows of signals, but evaluated on arbitrarily large signals with little to no performance degradation.
arXiv Detail & Related papers (2023-06-14T01:24:42Z) - Learning efficient backprojections across cortical hierarchies in real time [1.6474865533365743]
We introduce a bio-plausible method to learn efficient feedback weights in layered cortical hierarchies.
All weights are learned simultaneously with always-on plasticity and using only information locally available to the synapses.
Our method is applicable to a wide class of models and improves on previously known biologically plausible ways of credit assignment.
arXiv Detail & Related papers (2022-12-20T13:54:04Z) - Single-phase deep learning in cortico-cortical networks [1.7249361224827535]
We introduce a new model, bursting cortico-cortical networks (BurstCCN), which integrates bursting activity, short-term plasticity and dendrite-targeting interneurons.
Our results suggest that cortical features across sub-cellular, cellular, microcircuit and systems levels jointly underlie single-phase efficient deep learning in the brain.
arXiv Detail & Related papers (2022-06-23T15:10:57Z) - Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the loss gradient function flow.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z) - Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z) - MAP Propagation Algorithm: Faster Learning with a Team of Reinforcement Learning Agents [0.0]
An alternative way of training an artificial neural network is through treating each unit in the network as a reinforcement learning agent.
We propose a novel algorithm called MAP propagation to reduce this variance significantly.
Our work thus allows for the broader application of the teams of agents in deep reinforcement learning.
arXiv Detail & Related papers (2020-10-15T17:17:39Z) - Learning to Learn with Feedback and Local Plasticity [9.51828574518325]
We employ meta-learning to discover networks that learn using feedback connections and local, biologically inspired learning rules.
Our experiments show that meta-trained networks effectively use feedback connections to perform online credit assignment in multi-layer architectures.
arXiv Detail & Related papers (2020-06-16T22:49:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.