Sign and Relevance Learning
- URL: http://arxiv.org/abs/2110.07292v4
- Date: Tue, 12 Sep 2023 17:01:20 GMT
- Title: Sign and Relevance Learning
- Authors: Sama Daryanavard and Bernd Porr
- Abstract summary: Standard models of biologically realistic reinforcement learning employ a global error signal, which implies the use of shallow networks.
In this study, we introduce a novel network that solves this problem by propagating only the sign of the plasticity change.
Neuromodulation can be understood as a rectified error or relevance signal, while the top-down sign of the error signal determines whether long-term potentiation or long-term depression will occur.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Standard models of biologically realistic or biologically inspired
reinforcement learning employ a global error signal, which implies the use of
shallow networks. On the other hand, error backpropagation allows the use of
networks with multiple layers. However, precise error backpropagation is
difficult to justify in biologically realistic networks because it requires
precise weighted error backpropagation from layer to layer. In this study, we
introduce a novel network that solves this problem by propagating only the sign
of the plasticity change (i.e., LTP/LTD) throughout the whole network, while
neuromodulation controls the learning rate. Neuromodulation can be understood
as a rectified error or relevance signal, while the top-down sign of the error
signal determines whether long-term potentiation or long-term depression will
occur. To demonstrate the effectiveness of this approach, we applied it to a
real robotic task as a proof of concept. Our results show that this paradigm can
successfully perform complex tasks using a biologically plausible learning
mechanism.
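Reading the abstract literally, the learning rule can be sketched as below. This is a minimal illustration, assuming a two-layer rate-coded network in which the error's sign travels top-down through the sign of the feedback weights while its rectified magnitude gates the learning rate; the network, sizes, and variable names are our assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-layer rate network; sizes are arbitrary.
n_in, n_hid, n_out = 4, 8, 1
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))
eta = 0.01  # base learning rate

def forward(x):
    h = np.tanh(W1 @ x)
    return h, np.tanh(W2 @ h)

def sign_relevance_step(x, target):
    """One update: only the sign of the error travels top-down, while its
    rectified magnitude acts as a global neuromodulatory gain ('relevance')."""
    global W1, W2
    h, y = forward(x)
    err = target - y
    relevance = np.abs(err).mean()  # rectified error -> learning-rate gain
    s_out = np.sign(err)            # sign decides LTP (+) vs LTD (-)
    # Propagate only the sign layer-to-layer (here via the weights' sign).
    s_hid = np.sign(W2.T @ s_out)
    W2 += eta * relevance * np.outer(s_out, h)
    W1 += eta * relevance * np.outer(s_hid, x)

# e.g. drive the output toward 0.5 for a fixed input:
x = rng.normal(size=n_in)
for _ in range(200):
    sign_relevance_step(x, np.array([0.5]))
```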
Related papers
- Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning [32.122425860826525]
Error backpropagation has faced criticism for its lack of biological plausibility.
We propose counter-current learning (CCL), a biologically plausible framework for credit assignment in neural networks.
Our work presents a direction for biologically inspired and plausible learning algorithms, offering an alternative mechanism of learning and adaptation in neural networks.
arXiv Detail & Related papers (2024-09-30T00:47:13Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
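Forward-forward learning, which CSDP builds on, trains each layer with a local "goodness" objective instead of backpropagating between layers. The sketch below shows that generic forward-forward idea only, not the paper's spiking CSDP rule; the layer size and threshold are arbitrary assumptions.

```python
import torch

# Each layer is trained to give high "goodness" (sum of squared activations)
# for positive data and low goodness for negative data; gradients never
# cross layer boundaries.
layer = torch.nn.Linear(16, 32)
opt = torch.optim.SGD(layer.parameters(), lr=0.01)
theta = 2.0  # goodness threshold (illustrative)

def ff_step(x_pos, x_neg):
    g_pos = layer(x_pos).relu().pow(2).sum(dim=1)
    g_neg = layer(x_neg).relu().pow(2).sum(dim=1)
    # Logistic loss pushes positive goodness above theta, negative below.
    loss = torch.nn.functional.softplus(
        torch.cat([theta - g_pos, g_neg - theta])).mean()
    opt.zero_grad()
    loss.backward()  # gradient stays local to this layer
    opt.step()
    return loss.item()

# e.g. ff_step(torch.randn(8, 16), torch.randn(8, 16))
```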
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Disentangling the Causes of Plasticity Loss in Neural Networks [55.23250269007988]
We show that loss of plasticity can be decomposed into multiple independent mechanisms.
We show that a combination of layer normalization and weight decay is highly effective at maintaining plasticity in a variety of synthetic nonstationary learning tasks.
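A minimal PyTorch rendering of the reported remedy, combining layer normalization inside the model with weight decay in the optimizer; the architecture and hyperparameters here are illustrative, not the paper's experimental setup.

```python
import torch

# Layer normalization in the model plus weight decay in the optimizer:
# the combination reported to preserve plasticity under nonstationarity.
model = torch.nn.Sequential(
    torch.nn.Linear(32, 64),
    torch.nn.LayerNorm(64),  # keeps each layer's activations well-scaled
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),
)
# weight_decay penalizes weight growth, which otherwise drives units into
# saturated, hard-to-update regimes.
opt = torch.optim.AdamW(model.parameters(), weight_decay=1e-2)
```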
arXiv Detail & Related papers (2024-02-29T00:02:33Z)
- Solving Large-scale Spatial Problems with Convolutional Neural Networks [88.31876586547848]
We employ transfer learning to improve training efficiency for large-scale spatial problems.
We propose that a convolutional neural network (CNN) can be trained on small windows of signals, but evaluated on arbitrarily large signals with little to no performance degradation.
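One way this property arises in practice is with a fully convolutional network, whose weights are size-agnostic, so a model fitted on short windows can be applied to much longer signals. A small sketch; the channel counts and kernel sizes are assumptions, not the paper's architecture.

```python
import torch

# A fully convolutional 1-D network has no fixed-size layers, so the same
# weights accept inputs of any length.
net = torch.nn.Sequential(
    torch.nn.Conv1d(1, 16, kernel_size=5, padding=2),
    torch.nn.ReLU(),
    torch.nn.Conv1d(16, 1, kernel_size=5, padding=2),
)
small = torch.randn(8, 1, 64)      # training windows of length 64
large = torch.randn(1, 1, 10_000)  # evaluation on a much longer signal
assert net(small).shape == small.shape
assert net(large).shape == large.shape
```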
arXiv Detail & Related papers (2023-06-14T01:24:42Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Learning efficient backprojections across cortical hierarchies in real time [1.6474865533365743]
We introduce a bio-plausible method to learn efficient feedback weights in layered cortical hierarchies.
All weights are learned simultaneously with always-on plasticity and using only information locally available to the synapses.
Our method is applicable to a wide class of models and improves on previously known biologically plausible ways of credit assignment.
arXiv Detail & Related papers (2022-12-20T13:54:04Z)
- Single-phase deep learning in cortico-cortical networks [1.7249361224827535]
We introduce a new model, bursting cortico-cortical networks (BurstCCN), which integrates bursting activity, short-term plasticity and dendrite-targeting interneurons.
Our results suggest that cortical features across sub-cellular, cellular, microcircuit and systems levels jointly underlie single-phase efficient deep learning in the brain.
arXiv Detail & Related papers (2022-06-23T15:10:57Z)
- Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- MAP Propagation Algorithm: Faster Learning with a Team of Reinforcement Learning Agents [0.0]
An alternative way of training an artificial neural network is to treat each unit in the network as a reinforcement learning agent.
This approach, however, suffers from high variance in its weight updates; we propose a novel algorithm called MAP propagation to reduce this variance significantly.
Our work thus allows for the broader application of the teams of agents in deep reinforcement learning.
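The baseline being improved on can be sketched as a REINFORCE-style "team of agents," where each unit's stochastic output is reinforced by a single global reward; that shared scalar reward is the source of the variance MAP propagation targets. This sketch shows only that baseline, not MAP propagation itself, and the sizes and reward interface are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Every hidden unit samples a stochastic binary action and is reinforced
# by one scalar reward for the whole team, which makes the resulting
# gradient estimate high-variance.
W = rng.normal(scale=0.1, size=(8, 4))
eta = 0.05

def team_step(x, reward_fn):
    p = 1.0 / (1.0 + np.exp(-(W @ x)))           # each unit's firing probability
    a = (rng.random(p.shape) < p).astype(float)  # stochastic unit actions
    r = reward_fn(a)                             # one scalar reward for the team
    # REINFORCE update: correlate each unit's exploration with the reward.
    W[:] += eta * r * np.outer(a - p, x)
    return r

# e.g. reward the team for activating its first unit:
team_step(rng.normal(size=4), reward_fn=lambda a: a[0])
```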
arXiv Detail & Related papers (2020-10-15T17:17:39Z)
- Learning to Learn with Feedback and Local Plasticity [9.51828574518325]
We employ meta-learning to discover networks that learn using feedback connections and local, biologically inspired learning rules.
Our experiments show that meta-trained networks effectively use feedback connections to perform online credit assignment in multi-layer architectures.
arXiv Detail & Related papers (2020-06-16T22:49:07Z)