Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning
- URL: http://arxiv.org/abs/2409.19841v2
- Date: Wed, 23 Oct 2024 16:27:27 GMT
- Title: Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning
- Authors: Chia-Hsiang Kao, Bharath Hariharan
- Abstract summary: Error backpropagation has faced criticism for its lack of biological plausibility.
We propose counter-current learning (CCL), a biologically plausible framework for credit assignment in neural networks.
Our work presents a direction for biologically inspired and plausible learning algorithms, offering an alternative mechanism of learning and adaptation in neural networks.
- Score: 32.122425860826525
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite its widespread use in neural networks, error backpropagation has faced criticism for its lack of biological plausibility, suffering from issues such as the backward locking problem and the weight transport problem. These limitations have motivated researchers to explore more biologically plausible learning algorithms that could potentially shed light on how biological neural systems adapt and learn. Inspired by the counter-current exchange mechanisms observed in biological systems, we propose counter-current learning (CCL), a biologically plausible framework for credit assignment in neural networks. This framework employs a feedforward network to process input data and a feedback network to process targets, with each network enhancing the other through anti-parallel signal propagation. By leveraging the more informative signals from the bottom layer of the feedback network to guide the updates of the top layer of the feedforward network and vice versa, CCL enables the simultaneous transformation of source inputs to target outputs and the dynamic mutual influence of these transformations. Experimental results on MNIST, FashionMNIST, CIFAR10, and CIFAR100 datasets using multi-layer perceptrons and convolutional neural networks demonstrate that CCL achieves comparable performance to other biologically plausible algorithms while offering a more biologically realistic learning mechanism. Furthermore, we showcase the applicability of our approach to an autoencoder task, underscoring its potential for unsupervised representation learning. Our work presents a direction for biologically inspired and plausible learning algorithms, offering an alternative mechanism of learning and adaptation in neural networks.
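The mechanism described in the abstract — two networks running in anti-parallel, each supplying layer-wise targets for the other — can be illustrated with a toy sketch. Everything below (layer sizes, the specific layer pairing, the delta-rule updates, learning rate) is an illustrative assumption, not the paper's exact formulation; it only shows the general shape of the idea: no global backpropagated gradient, only local updates driven by the counterpart network's activations.

```python
import numpy as np

# Hypothetical CCL-style sketch on a single regression example.
rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

x = np.array([1.0, -0.5, 0.3, 0.8])   # source input (fed to feedforward net)
y = np.array([0.6, -0.2, 0.4])        # target (fed to feedback net)

# Feedforward net F: 4 -> 8 -> 3 (maps x to y_hat)
Wf1 = rng.normal(0.0, 0.3, (8, 4))
Wf2 = rng.normal(0.0, 0.3, (3, 8))
# Feedback net G runs anti-parallel: 3 -> 8 -> 4 (maps y to x_hat)
Wg1 = rng.normal(0.0, 0.3, (8, 3))
Wg2 = rng.normal(0.0, 0.3, (4, 8))

lr = 0.05

def ffwd(inp):
    h = relu(Wf1 @ inp)
    return h, Wf2 @ h

_, y_hat0 = ffwd(x)
mse_before = np.mean((y - y_hat0) ** 2)

for _ in range(300):
    # Anti-parallel passes: F consumes the input, G consumes the target.
    h1 = relu(Wf1 @ x)
    y_hat = Wf2 @ h1
    t1 = relu(Wg1 @ y)
    x_hat = Wg2 @ t1

    # Local delta-rule updates: each layer's target comes from the
    # anti-parallel layer of the other network, so the feedback net's
    # early activations guide the feedforward net's late layers, and
    # vice versa. No global error gradient is propagated.
    Wf2 += lr * np.outer(y - y_hat, h1)            # top of F targets y
    Wf1 += lr * np.outer((t1 - h1) * (h1 > 0), x)  # hidden of F targets t1
    Wg2 += lr * np.outer(x - x_hat, t1)            # top of G targets x
    Wg1 += lr * np.outer((h1 - t1) * (t1 > 0), y)  # hidden of G targets h1

_, y_hat1 = ffwd(x)
mse_after = np.mean((y - y_hat1) ** 2)
print(mse_before, mse_after)  # mse_after should be much smaller
```

In this toy setting the two hidden representations pull toward each other while each output layer does a local delta-rule fit, so the feedforward output approaches the target without any weight transport.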
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Towards Biologically Plausible Computing: A Comprehensive Comparison [24.299920289520013]
Backpropagation is a cornerstone algorithm in training neural networks for supervised learning.
The biological plausibility of backpropagation is questioned due to its requirements for weight symmetry, global error computation, and dual-phase training.
In this study, we establish criteria for biological plausibility that a desirable learning algorithm should meet.
arXiv Detail & Related papers (2024-06-23T09:51:20Z) - CHANI: Correlation-based Hawkes Aggregation of Neurons with bio-Inspiration [7.26259898628108]
The present work aims at proving mathematically that a neural network inspired by biology can learn a classification task thanks to local transformations only.
We propose a spiking neural network named CHANI, whose neurons' activity is modeled by Hawkes processes.
arXiv Detail & Related papers (2024-05-29T07:17:58Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - The Predictive Forward-Forward Algorithm [79.07468367923619]
We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems.
We design a novel, dynamic recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit.
PFF efficiently learns to propagate learning signals and updates synapses with forward passes only.
arXiv Detail & Related papers (2023-01-04T05:34:48Z) - Biologically Plausible Training of Deep Neural Networks Using a Top-down Credit Assignment Network [32.575847142016585]
A Top-Down Credit Assignment Network (TDCA-network) is designed to train a bottom-up network.
TDCA-network serves as a substitute for the conventional loss function and the back-propagation algorithm, widely used in neural network training.
The results indicate TDCA-network holds promising potential to train neural networks across diverse datasets.
arXiv Detail & Related papers (2022-08-01T07:14:37Z) - Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network toward a desired output target; the resulting control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
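The core DFC loop described above can be sketched in a deliberately minimal, single-layer toy: an integral controller drives the output onto the target, and the steady-state control signal serves as the local teaching signal. The real method is multi-layer and approximates Gauss-Newton optimization; the layer size, controller gain, and variable names here are assumptions for illustration only.

```python
import numpy as np

# Toy single-layer sketch of the feedback-control idea behind DFC.
rng = np.random.default_rng(1)

x = np.array([1.0, 0.5, -0.5, 0.2])   # presynaptic activity
y_star = np.array([0.3, -0.7])        # desired output target
W = rng.normal(0.0, 0.2, (2, 4))      # forward weights

k, lr = 0.2, 0.1                      # controller gain, learning rate

for _ in range(100):
    # Run the controller to steady state for this input: y is the
    # controlled output, u integrates the remaining output error.
    u = np.zeros(2)
    for _ in range(60):
        y = W @ x + u
        u += k * (y_star - y)
    # At steady state y == y_star and u == y_star - W @ x, i.e. u holds
    # the residual error, so "control signal times presynaptic activity"
    # is a fully local delta-rule update.
    W += lr * np.outer(u, x)

residual = np.linalg.norm(W @ x - y_star)
print(residual)  # uncontrolled output now matches the target
```

As training proceeds, the control signal needed to reach the target shrinks toward zero, which is the sense in which the controller's effort is "minimized" once the forward weights have absorbed the task.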
arXiv Detail & Related papers (2021-06-15T05:30:17Z) - Learning in Deep Neural Networks Using a Biologically Inspired Optimizer [5.144809478361604]
We propose a novel biologically inspired optimizer for artificial neural networks (ANNs) and spiking neural networks (SNNs).
GRAPES implements a weight-distribution dependent modulation of the error signal at each node of the neural network.
We show that this biologically inspired mechanism leads to a systematic improvement of the convergence rate of the network, and substantially improves classification accuracy of ANNs and SNNs.
arXiv Detail & Related papers (2021-04-23T13:50:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.