Toward Practical Equilibrium Propagation: Brain-inspired Recurrent Neural Network with Feedback Regulation and Residual Connections
- URL: http://arxiv.org/abs/2508.11659v1
- Date: Tue, 05 Aug 2025 15:07:50 GMT
- Title: Toward Practical Equilibrium Propagation: Brain-inspired Recurrent Neural Network with Feedback Regulation and Residual Connections
- Authors: Zhuo Liu, Tao Chen
- Abstract summary: We propose a biologically plausible Feedback-regulated REsidual recurrent neural network (FRE-RNN) and study its learning performance. The improvement in convergence property reduces the computational cost and training time of EP by orders of magnitude. Our approach substantially enhances the applicability and practicality of EP in large-scale networks that underpin artificial intelligence.
- Score: 7.464380138405363
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Brain-like intelligent systems need brain-like learning methods. Equilibrium Propagation (EP) is a biologically plausible learning framework with strong potential for brain-inspired computing hardware. However, existing implementations of EP suffer from instability and prohibitively high computational costs. Inspired by the structure and dynamics of the brain, we propose a biologically plausible Feedback-regulated REsidual recurrent neural network (FRE-RNN) and study its learning performance in the EP framework. Feedback regulation enables rapid convergence by reducing the spectral radius. The improvement in convergence property reduces the computational cost and training time of EP by orders of magnitude, delivering performance on par with backpropagation (BP) in benchmark tasks. Meanwhile, residual connections with brain-inspired topologies help alleviate the vanishing gradient problem that arises when feedback pathways are weak in deep RNNs. Our approach substantially enhances the applicability and practicality of EP in large-scale networks that underpin artificial intelligence. The techniques developed here also offer guidance to implementing in-situ learning in physical neural networks.
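The spectral-radius claim in the abstract can be made concrete with a minimal toy sketch (illustrative only, not the paper's FRE-RNN): for a linear recurrent relaxation h ← Wh + b, the number of iterations needed to reach the fixed point grows sharply as the spectral radius of W approaches 1, so a feedback mechanism that shrinks the spectral radius directly cuts the cost of EP's relaxation phases. All names below (`relax_to_equilibrium`, `scale_to_radius`) are hypothetical helpers for this sketch.

```python
import numpy as np

def relax_to_equilibrium(W, b, tol=1e-8, max_iters=10_000):
    """Iterate h = W h + b until the update is below tol; return (h, iterations)."""
    h = np.zeros_like(b)
    for t in range(1, max_iters + 1):
        h_next = W @ h + b
        if np.linalg.norm(h_next - h) < tol:
            return h_next, t
        h = h_next
    return h, max_iters

def scale_to_radius(W, rho):
    """Rescale W so its spectral radius equals rho."""
    return W * (rho / max(abs(np.linalg.eigvals(W))))

rng = np.random.default_rng(0)
n = 50
W_raw = rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

# Same random connectivity, two spectral radii: the smaller radius
# (standing in for stronger feedback regulation) converges much faster.
h_slow, slow_iters = relax_to_equilibrium(scale_to_radius(W_raw, 0.95), b)
h_fast, fast_iters = relax_to_equilibrium(scale_to_radius(W_raw, 0.30), b)
print(f"rho=0.95: {slow_iters} iters; rho=0.30: {fast_iters} iters")
```

Since the per-step error contracts roughly by a factor of the spectral radius, the iteration count scales like 1/log(1/ρ), which is why reducing ρ yields order-of-magnitude savings in relaxation time.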
Related papers
- General Self-Prediction Enhancement for Spiking Neurons [71.01912385372577]
Spiking Neural Networks (SNNs) are highly energy-efficient due to event-driven, sparse computation, but their training is challenged by spike non-differentiability and trade-offs among performance, efficiency, and biological plausibility. We propose a self-prediction enhanced spiking neuron method that generates an internal prediction current from its input-output history to modulate membrane potential. This design offers dual advantages: it creates a continuous gradient path that alleviates vanishing gradients and boosts training stability and accuracy, while also aligning with biological principles, resembling distal dendritic modulation and error-driven synaptic plasticity.
arXiv Detail & Related papers (2026-01-29T15:08:48Z) - Dopamine-driven synaptic credit assignment in neural networks [0.0]
The paper addresses the synaptic Credit Assignment Problem (CAP). Dopamine is developed for Weight Perturbation learning and exploits updating of weights towards optima. Dopamine is tested for training multi-layered perceptrons on XOR tasks, and recurrent neural networks on chaotic time-series forecasting.
arXiv Detail & Related papers (2025-10-25T06:17:49Z) - Training Deep Normalization-Free Spiking Neural Networks with Lateral Inhibition [52.59263087086756]
Training deep spiking neural networks (SNNs) has critically depended on explicit normalization schemes, such as batch normalization. We propose a normalization-free learning framework that incorporates lateral inhibition inspired by cortical circuits. We show that our framework enables stable training of deep SNNs with biological realism and achieves competitive performance without resorting to explicit normalization.
arXiv Detail & Related papers (2025-09-27T11:11:30Z) - Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Contribute to balance, wire in accordance: Emergence of backpropagation from a simple, bio-plausible neuroplasticity rule [0.0]
We introduce a novel neuroplasticity rule that offers a potential mechanism for implementing BP in the brain. We demonstrate mathematically that our learning rule precisely replicates BP in layered neural networks without any approximations.
arXiv Detail & Related papers (2024-05-23T03:28:52Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - An Unsupervised STDP-based Spiking Neural Network Inspired By Biologically Plausible Learning Rules and Connections [10.188771327458651]
Spike-timing-dependent plasticity (STDP) is a general learning rule in the brain, but spiking neural networks (SNNs) trained with STDP alone are inefficient and perform poorly.
We design an adaptive synaptic filter and introduce the adaptive spiking threshold to enrich the representation ability of SNNs.
Our model achieves the current state-of-the-art performance of unsupervised STDP-based SNNs in the MNIST and FashionMNIST datasets.
arXiv Detail & Related papers (2022-07-06T14:53:32Z) - Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z) - Biologically-inspired neuronal adaptation improves learning in neural networks [0.7734726150561086]
Humans still outperform artificial neural networks on many tasks.
We draw inspiration from the brain to improve machine learning algorithms.
We add adaptation to multilayer perceptrons and convolutional neural networks trained on MNIST and CIFAR-10.
arXiv Detail & Related papers (2022-04-08T16:16:02Z) - Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z) - Learning in Deep Neural Networks Using a Biologically Inspired Optimizer [5.144809478361604]
We propose a novel biologically inspired optimizer for artificial neural networks (ANNs) and spiking neural networks (SNNs).
GRAPES implements a weight-distribution dependent modulation of the error signal at each node of the neural network.
We show that this biologically inspired mechanism leads to a systematic improvement of the convergence rate of the network, and substantially improves classification accuracy of ANNs and SNNs.
arXiv Detail & Related papers (2021-04-23T13:50:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.