Meta-Learning Biologically Plausible Plasticity Rules with Random
Feedback Pathways
- URL: http://arxiv.org/abs/2210.16414v1
- Date: Fri, 28 Oct 2022 21:40:56 GMT
- Title: Meta-Learning Biologically Plausible Plasticity Rules with Random
Feedback Pathways
- Authors: Navid Shervani-Tabar and Robert Rosenbaum
- Abstract summary: We develop a novel meta-plasticity approach to discover interpretable, biologically plausible plasticity rules.
Our results highlight the potential of meta-plasticity to discover effective, interpretable learning rules satisfying biological constraints.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Backpropagation is widely used to train artificial neural networks, but its
relationship to synaptic plasticity in the brain is unknown. Some biological
models of backpropagation rely on feedback projections that are symmetric with
feedforward connections, but experiments do not corroborate the existence of
such symmetric backward connectivity. Random feedback alignment offers an
alternative model in which errors are propagated backward through fixed, random
backward connections. This approach successfully trains shallow models, but
learns slowly and does not perform well with deeper models or online learning.
In this study, we develop a novel meta-plasticity approach to discover
interpretable, biologically plausible plasticity rules that improve online
learning performance with fixed random feedback connections. The resulting
plasticity rules show improved online training of deep models in the low data
regime. Our results highlight the potential of meta-plasticity to discover
effective, interpretable learning rules satisfying biological constraints.
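The core mechanism the abstract describes, propagating errors backward through a fixed random matrix instead of the transposed forward weights, can be sketched in a toy NumPy example. This is an illustrative feedback-alignment setup under assumed dimensions and learning rate, not the paper's meta-learned plasticity rule:

```python
import numpy as np

# Toy feedback-alignment sketch (assumed architecture, not the paper's model):
# a two-layer network whose backward pass routes the output error through a
# fixed random matrix B instead of W2.T.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B = rng.normal(0, 0.5, (n_hid, n_out))  # fixed random feedback, never trained

def step(x, y, lr=0.01):
    """One online update using feedback-alignment credit assignment."""
    global W1, W2
    h = np.maximum(W1 @ x, 0.0)        # ReLU hidden layer
    y_hat = W2 @ h
    e = y_hat - y                      # output error
    delta_h = (B @ e) * (h > 0)        # error routed through B, not W2.T
    W2 = W2 - lr * np.outer(e, h)
    W1 = W1 - lr * np.outer(delta_h, x)
    return float(0.5 * e @ e)          # squared-error loss

x = rng.normal(size=n_in)
y = np.array([1.0, -1.0])
losses = [step(x, y) for _ in range(200)]
```

Because `B` is fixed, the hidden-layer updates are only approximately aligned with the true gradient; the paper's contribution is meta-learning plasticity rules that improve on this baseline in the online, low-data regime.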
Related papers
- On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z)
- Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z)
- Contrastive-Signal-Dependent Plasticity: Forward-Forward Learning of Spiking Neural Systems [73.18020682258606]
We develop a neuro-mimetic architecture, composed of spiking neuronal units, where individual layers of neurons operate in parallel.
We propose an event-based generalization of forward-forward learning, which we call contrastive-signal-dependent plasticity (CSDP).
Our experimental results on several pattern datasets demonstrate that the CSDP process works well for training a dynamic recurrent spiking network capable of both classification and reconstruction.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- A Spiking Neuron Synaptic Plasticity Model Optimized for Unsupervised Learning [0.0]
Spiking neural networks (SNN) are considered a promising basis for performing all kinds of learning tasks: unsupervised, supervised, and reinforcement learning.
Learning in SNN is implemented through synaptic plasticity: the rules that determine the dynamics of synaptic weights, typically depending on the activity of the pre- and post-synaptic neurons.
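A common textbook form of such a rule is pair-based spike-timing-dependent plasticity (STDP), where the sign and magnitude of the weight change depend on the relative timing of pre- and post-synaptic spikes. The following is a generic sketch with assumed amplitudes and time constant, not the optimized model from the paper:

```python
import math

# Generic pair-based STDP sketch (textbook rule with assumed constants):
# potentiation when the presynaptic spike precedes the postsynaptic spike,
# depression otherwise, with exponential decay in the timing difference.
A_PLUS, A_MINUS = 0.01, 0.012  # assumed learning amplitudes
TAU = 20.0                     # assumed decay time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU)
    else:        # post before (or with) pre -> depression
        return -A_MINUS * math.exp(dt / TAU)
```

The effect of each pair decays with the timing gap, so near-coincident spikes dominate the weight dynamics.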
arXiv Detail & Related papers (2021-11-12T15:26:52Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z)
- Unveiling the role of plasticity rules in reservoir computing [0.0]
Reservoir Computing (RC) is an appealing approach in Machine Learning.
We analyze the role that plasticity rules play in the changes that lead to better RC performance.
arXiv Detail & Related papers (2021-01-14T19:55:30Z)
- A More Biologically Plausible Local Learning Rule for ANNs [6.85316573653194]
The proposed learning rule is derived from the concepts of spike-timing-dependent plasticity and neuronal association.
A preliminary evaluation done on the binary classification of MNIST and IRIS datasets shows comparable performance with backpropagation.
The local nature of learning gives a possibility of large scale distributed and parallel learning in the network.
arXiv Detail & Related papers (2020-11-24T10:35:47Z)
- Learning to Learn with Feedback and Local Plasticity [9.51828574518325]
We employ meta-learning to discover networks that learn using feedback connections and local, biologically inspired learning rules.
Our experiments show that meta-trained networks effectively use feedback connections to perform online credit assignment in multi-layer architectures.
arXiv Detail & Related papers (2020-06-16T22:49:07Z)
- Adaptive Reinforcement Learning through Evolving Self-Modifying Neural Networks [0.0]
Current methods in Reinforcement Learning (RL) only adjust to new interactions after reflection over a specified time interval.
Recent work addresses this by endowing artificial neural networks with neuromodulated plasticity, which has been shown to improve performance on simple RL tasks trained using backpropagation.
Here we study the problem of meta-learning in a challenging quadruped domain, where each leg of the quadruped has a chance of becoming unusable.
Results demonstrate that agents evolved using self-modifying plastic networks are more capable of adapting to complex meta-learning tasks, even outperforming the same network updated using gradient
arXiv Detail & Related papers (2020-05-22T02:24:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.