Minimizing Control for Credit Assignment with Strong Feedback
- URL: http://arxiv.org/abs/2204.07249v1
- Date: Thu, 14 Apr 2022 22:06:21 GMT
- Title: Minimizing Control for Credit Assignment with Strong Feedback
- Authors: Alexander Meulemans, Matilde Tristany Farinha, Maria R. Cervera,
João Sacramento, Benjamin F. Grewe
- Abstract summary: Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
- Score: 65.59995261310529
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The success of deep learning attracted interest in whether the brain learns
hierarchical representations using gradient-based learning. However, current
biologically plausible methods for gradient-based credit assignment in deep
neural networks need infinitesimally small feedback signals, which is
problematic in biologically realistic noisy environments and at odds with
experimental evidence in neuroscience showing that top-down feedback can
significantly influence neural activity. Building upon deep feedback control
(DFC), a recently proposed credit assignment method, we combine strong feedback
influences on neural activity with gradient-based learning and show that this
naturally leads to a novel view on neural network optimization. Instead of
gradually changing the network weights towards configurations with low output
loss, weight updates gradually minimize the amount of feedback required from a
controller that drives the network to the supervised output label. Moreover, we
show that the use of strong feedback in DFC allows learning forward and
feedback connections simultaneously, using a learning rule fully local in space
and time. We complement our theoretical results with experiments on standard
computer-vision benchmarks, showing competitive performance to backpropagation
as well as robustness to noise. Overall, our work presents a fundamentally
novel view of learning as control minimization, while sidestepping biologically
unrealistic assumptions.
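To make the control-minimization view concrete, here is a minimal NumPy sketch assuming a two-layer linear network, a simple leaky-integral controller, and made-up sizes, gains, and learning rates. It only illustrates the idea that weight updates can shrink the feedback a controller must supply; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 inputs -> 4 hidden units -> 2 outputs.
W1 = rng.normal(scale=0.1, size=(4, 3))  # forward weights, layer 1
W2 = rng.normal(scale=0.1, size=(2, 4))  # forward weights, layer 2
Q = rng.normal(scale=0.1, size=(4, 2))   # feedback weights into the hidden layer

def dfc_like_step(x, y_target, lr=0.1, gain=0.5, leak=0.05, steps=100):
    """Drive the output toward the label with a leaky-integral controller,
    then update the weights so that less control is needed next time."""
    global W1, W2
    u = np.zeros_like(y_target)            # control signal
    for _ in range(steps):                 # controlled network dynamics
        h = W1 @ x + Q @ u                 # strong feedback shifts hidden activity
        y = W2 @ h
        u = (1 - leak) * u + gain * (y_target - y)
    # Local updates: pull each layer's feedforward prediction toward its
    # controlled activity, which reduces the residual control signal.
    W1 += lr * np.outer(h - W1 @ x, x)     # note: h - W1 @ x equals Q @ u here
    W2 += lr * np.outer(y_target - y, h)
    return np.linalg.norm(u)

x = np.array([1.0, -0.5, 0.2])
y_star = np.array([0.3, -0.7])
for epoch in range(200):
    control_norm = dfc_like_step(x, y_star)
print(f"residual control after training: {control_norm:.4f}")
```

As training proceeds, the feedforward pass alone lands closer to the target, so the norm of the control signal decays; that decay, rather than an explicit output loss, is what the updates minimize.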
Related papers
- Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning [32.122425860826525]
Error backpropagation has faced criticism for its lack of biological plausibility.
We propose counter-current learning (CCL), a biologically plausible framework for credit assignment in neural networks.
Our work presents a direction for biologically inspired and plausible learning algorithms, offering an alternative mechanism of learning and adaptation in neural networks.
arXiv Detail & Related papers (2024-09-30T00:47:13Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Layer-wise Feedback Propagation [53.00944147633484]
We present Layer-wise Feedback Propagation (LFP), a novel training approach for neural-network-like predictors.
LFP assigns rewards to individual connections based on their respective contributions to solving a given task.
We demonstrate its effectiveness in achieving comparable performance to gradient descent on various models and datasets.
arXiv Detail & Related papers (2023-08-23T10:48:28Z)
- Observer-Feedback-Feedforward Controller Structures in Reinforcement Learning [0.0]
The paper proposes the use of structured neural networks for reinforcement-learning-based nonlinear adaptive control.
The focus is on partially observable systems, with separate neural networks for the state/feedforward observer and for the state-feedback/feedforward controller.
arXiv Detail & Related papers (2023-04-20T12:59:21Z)
- Neuro-Modulated Hebbian Learning for Fully Test-Time Adaptation [22.18972584098911]
Fully test-time adaptation aims to adapt the network model based on sequential analysis of input samples during the inference stage.
We take inspiration from biologically plausible learning, where neuron responses are tuned via a local synapse-change procedure.
We design a soft Hebbian learning process which provides an unsupervised and effective mechanism for online adaptation.
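For intuition, here is a minimal sketch of one common soft Hebbian form: a softmax over postsynaptic responses gates an Oja-style local update. The rule, sizes, and hyperparameters are illustrative assumptions, not the paper's specific neuro-modulated rule.

```python
import numpy as np

def soft_hebbian_step(W, x, lr=0.01, temperature=1.0):
    """One label-free, gradient-free update of a single layer."""
    a = W @ x                                  # postsynaptic pre-activations
    y = np.exp((a - a.max()) / temperature)    # soft competition between neurons
    y /= y.sum()
    # Oja-style local rule gated by the soft activation: each neuron moves
    # its weight vector toward the input, with a decay term that keeps
    # weight norms bounded. Only locally available quantities are used.
    W = W + lr * y[:, None] * (x[None, :] - y[:, None] * W)
    return W

# Toy online adaptation over a stream of unlabeled test inputs.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(5, 8))   # 5 neurons, 8 input features (made up)
for _ in range(100):
    W = soft_hebbian_step(W, rng.normal(size=8))
```

Because the update depends only on each neuron's own input and (softly competing) output, it can run during inference without labels or backpropagated gradients, which is what makes it suitable for fully test-time adaptation.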
arXiv Detail & Related papers (2023-03-02T02:18:56Z)
- The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning in equilibrium systems, which applies to equilibrium recurrent neural networks, deep equilibrium models, and meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z)
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target; the control signal itself can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z)
- Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z)
- Learning to Learn with Feedback and Local Plasticity [9.51828574518325]
We employ meta-learning to discover networks that learn using feedback connections and local, biologically inspired learning rules.
Our experiments show that meta-trained networks effectively use feedback connections to perform online credit assignment in multi-layer architectures.
arXiv Detail & Related papers (2020-06-16T22:49:07Z)