Contribute to balance, wire in accordance: Emergence of backpropagation from a simple, bio-plausible neuroplasticity rule
- URL: http://arxiv.org/abs/2405.14139v1
- Date: Thu, 23 May 2024 03:28:52 GMT
- Title: Contribute to balance, wire in accordance: Emergence of backpropagation from a simple, bio-plausible neuroplasticity rule
- Authors: Xinhao Fan, Shreesh P Mysore
- Abstract summary: We introduce a novel neuroplasticity rule that offers a potential mechanism for implementing BP in the brain.
We demonstrate mathematically that our learning rule precisely replicates BP in layered neural networks without any approximations.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Backpropagation (BP) has been pivotal in advancing machine learning and remains essential in computational applications and comparative studies of biological and artificial neural networks. Despite its widespread use, the implementation of BP in the brain remains elusive, and its biological plausibility is often questioned due to inherent issues such as the need for symmetry of weights between forward and backward connections, and the requirement of distinct forward and backward phases of computation. Here, we introduce a novel neuroplasticity rule that offers a potential mechanism for implementing BP in the brain. Similar in general form to the classical Hebbian rule, this rule is based on the core principles of maintaining the balance of excitatory and inhibitory inputs as well as on retrograde signaling, and operates over three progressively slower timescales: neural firing, retrograde signaling, and neural plasticity. We hypothesize that each neuron possesses an internal state, termed credit, in addition to its firing rate. After achieving equilibrium in firing rates, neurons receive credits based on their contribution to the E-I balance of postsynaptic neurons through retrograde signaling. As the network's credit distribution stabilizes, connections from those presynaptic neurons are strengthened that significantly contribute to the balance of postsynaptic neurons. We demonstrate mathematically that our learning rule precisely replicates BP in layered neural networks without any approximations. Simulations on artificial neural networks reveal that this rule induces varying community structures in networks, depending on the learning rate. This simple theoretical framework presents a biologically plausible implementation of BP, with testable assumptions and predictions that may be evaluated through biological experiments.
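The abstract's three-timescale scheme can be illustrated with a minimal NumPy sketch. This is not the paper's formal construction, only an assumed squared-error readout with ReLU units; here "credit" is the per-neuron internal state the abstract describes, assigned retrogradely after firing rates settle, and the resulting weight updates coincide with BP gradients, as the paper's equivalence result predicts.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_grad(z):
    return (z > 0).astype(float)

rng = np.random.default_rng(0)
sizes = [4, 8, 3]
Ws = [rng.normal(scale=0.5, size=(m, n)) for n, m in zip(sizes, sizes[1:])]

x = rng.normal(size=(sizes[0],))
target = rng.normal(size=(sizes[-1],))

# Timescale 1 (fastest): firing rates equilibrate in a forward sweep.
zs, rates = [], [x]
for W in Ws:
    z = W @ rates[-1]
    zs.append(z)
    rates.append(relu(z))

# Timescale 2: credits propagate retrogradely. Each neuron's credit
# reflects its contribution to the imbalance of its postsynaptic
# neurons; for a squared-error readout this is the usual error signal.
credits = [None] * len(Ws)
credits[-1] = (rates[-1] - target) * relu_grad(zs[-1])
for l in range(len(Ws) - 2, -1, -1):
    credits[l] = (Ws[l + 1].T @ credits[l + 1]) * relu_grad(zs[l])

# Timescale 3 (slowest): plasticity. Connections strengthen in
# proportion to presynaptic rate times postsynaptic credit.
eta = 0.01
grads = [np.outer(credits[l], rates[l]) for l in range(len(Ws))]
updates = [W - eta * g for W, g in zip(Ws, grads)]
```

Under these assumptions, `grads` equals the gradient of `0.5 * ||output - target||^2` computed by standard backpropagation, which is the sense in which the rule "precisely replicates BP in layered neural networks."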
Related papers
- The Neuron as a Direct Data-Driven Controller [43.8450722109081]
This study extends the current normative models, which primarily optimize prediction, by conceptualizing neurons as optimal feedback controllers.
We model neurons as biologically feasible controllers which implicitly identify loop dynamics, infer latent states and optimize control.
Our model presents a significant departure from the traditional, feedforward, instant-response McCulloch-Pitts-Rosenblatt neuron, offering a novel and biologically-informed fundamental unit for constructing neural networks.
arXiv Detail & Related papers (2024-01-03T01:24:10Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Evolutionary algorithms as an alternative to backpropagation for supervised training of Biophysical Neural Networks and Neural ODEs [12.357635939839696]
We investigate the use of "gradient-estimating" evolutionary algorithms for training biophysically based neural networks.
We find that EAs have several advantages making them desirable over direct BP.
Our findings suggest that biophysical neurons could provide useful benchmarks for testing the limits of BP methods.
arXiv Detail & Related papers (2023-11-17T20:59:57Z)
- Contrastive-Signal-Dependent Plasticity: Forward-Forward Learning of Spiking Neural Systems [73.18020682258606]
We develop a neuro-mimetic architecture, composed of spiking neuronal units, where individual layers of neurons operate in parallel.
We propose an event-based generalization of forward-forward learning, which we call contrastive-signal-dependent plasticity (CSDP)
Our experimental results on several pattern datasets demonstrate that the CSDP process works well for training a dynamic recurrent spiking network capable of both classification and reconstruction.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons [0.7340017786387767]
We introduce Latent Equilibrium, a new framework for inference and learning in networks of slow components.
We derive disentangled neuron and synapse dynamics from a prospective energy function.
We show how our principle can be applied to detailed models of cortical microcircuitry.
arXiv Detail & Related papers (2021-10-27T16:15:55Z)
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target; the control signal can then be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Predictive coding in balanced neural networks with noise, chaos and delays [24.76770648963407]
We introduce an analytically tractable model of balanced predictive coding, in which the degree of balance and the degree of weight disorder can be dissociated.
Our work provides and solves a general theoretical framework for dissecting the differential contributions of neural noise, synaptic disorder, chaos, synaptic delays, and balance to the fidelity of predictive neural codes.
arXiv Detail & Related papers (2020-06-25T05:03:27Z)
- Equilibrium Propagation for Complete Directed Neural Networks [0.0]
The most successful learning algorithm for artificial neural networks, backpropagation, is considered biologically implausible.
We contribute to the topic of biologically plausible neuronal learning by building upon and extending the equilibrium propagation learning framework.
arXiv Detail & Related papers (2020-06-15T22:12:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.