A Computational Framework of Cortical Microcircuits Approximates
Sign-concordant Random Backpropagation
- URL: http://arxiv.org/abs/2205.07292v1
- Date: Sun, 15 May 2022 14:22:03 GMT
- Title: A Computational Framework of Cortical Microcircuits Approximates
Sign-concordant Random Backpropagation
- Authors: Yukun Yang, Peng Li
- Abstract summary: We propose a hypothetical framework consisting of a new microcircuit architecture and its supporting Hebbian learning rules.
We employ the Hebbian rule operating in local compartments to update synaptic weights and achieve supervised learning in a biologically plausible manner.
The proposed framework is benchmarked on several datasets including MNIST and CIFAR10, demonstrating promising BP-comparable accuracy.
- Score: 7.601127912271984
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Several recent studies attempt to address the biological implausibility of
the well-known backpropagation (BP) method. While promising methods such as
feedback alignment, direct feedback alignment, and their variants like
sign-concordant feedback alignment tackle BP's weight transport problem, their
validity remains controversial owing to a set of other unsolved issues. In this
work, we answer the question of whether it is possible to realize random
backpropagation solely based on mechanisms observed in neuroscience. We propose
a hypothetical framework consisting of a new microcircuit architecture and its
supporting Hebbian learning rules. Comprising three types of cells and two
types of synaptic connectivity, the proposed microcircuit architecture computes
and propagates error signals through local feedback connections and supports
the training of multi-layered spiking neural networks with a globally defined
spiking error function. We employ the Hebbian rule operating in local
compartments to update synaptic weights and achieve supervised learning in a
biologically plausible manner. Finally, we interpret the proposed framework
from an optimization point of view and show its equivalence to sign-concordant
feedback alignment. The proposed framework is benchmarked on several datasets
including MNIST and CIFAR10, demonstrating promising BP-comparable accuracy.
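The sign-concordant feedback alignment that the paper's framework is shown to be equivalent to can be illustrated with a minimal sketch: errors are propagated backward through a feedback matrix whose magnitudes are fixed and random but whose signs are copied from the forward weights. The toy regression task, rate-based ReLU network, layer sizes, and learning rate below are illustrative assumptions, not the paper's setup (which trains spiking networks with Hebbian rules in a microcircuit architecture).

```python
import numpy as np

# Sketch of sign-concordant feedback alignment (sFA) on a toy two-layer
# rate network; all names and hyperparameters here are assumptions.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
# Fixed random feedback magnitudes; the signs are copied from the forward
# weights at every step, which is what "sign-concordant" means.
B_mag = np.abs(rng.normal(0.0, 0.5, (n_out, n_hid)))

X = rng.normal(size=(64, n_in))
T = rng.normal(size=(64, n_out))        # toy regression targets
lr = 0.01

def forward(X, W1, W2):
    h = np.maximum(X @ W1.T, 0.0)       # ReLU hidden layer
    return h, h @ W2.T

_, y = forward(X, W1, W2)
loss0 = float(np.mean((y - T) ** 2))    # loss before training
for _ in range(200):
    h, y = forward(X, W1, W2)
    e = y - T                           # output error
    B = np.sign(W2) * B_mag             # sign-concordant feedback matrix
    dh = (e @ B) * (h > 0)              # error routed back via B, not W2.T
    W2 -= lr * e.T @ h / len(X)
    W1 -= lr * dh.T @ X / len(X)
loss1 = float(np.mean((forward(X, W1, W2)[1] - T) ** 2))
```

Replacing `B` with `W2` recovers exact backpropagation, and dropping the `np.sign(W2)` factor recovers plain feedback alignment, which makes the sign-concordance the only ingredient separating the three schemes in this sketch.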
Related papers
- Correlative Information Maximization: A Biologically Plausible Approach to Supervised Deep Neural Networks without Weight Symmetry [43.584567991256925]
We propose a new normative approach to describe the signal propagation in biological neural networks in both forward and backward directions.
This framework addresses many concerns about the biological-plausibility of conventional artificial neural networks and the backpropagation algorithm.
Our approach provides a natural resolution to the weight symmetry problem between forward and backward signal propagation paths.
arXiv Detail & Related papers (2023-06-07T22:14:33Z)
- The Predictive Forward-Forward Algorithm [79.07468367923619]
We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems.
We design a novel, dynamic recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit.
PFF efficiently learns to propagate learning signals and updates synapses with forward passes only.
arXiv Detail & Related papers (2023-01-04T05:34:48Z)
- Position-aware Structure Learning for Graph Topology-imbalance by Relieving Under-reaching and Over-squashing [67.83086131278904]
Topology-imbalance is a graph-specific imbalance problem caused by the uneven topology positions of labeled nodes.
We propose a novel position-aware graph structure learning framework named PASTEL.
Our key insight is to enhance the connectivity of nodes within the same class for more supervision information.
arXiv Detail & Related papers (2022-08-17T14:04:21Z)
- Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z)
- Hybrid Predictive Coding: Inferring, Fast and Slow [62.997667081978825]
We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner.
We demonstrate that our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs at minimum computational expense.
arXiv Detail & Related papers (2022-04-05T12:52:45Z)
- Constrained Parameter Inference as a Principle for Learning [5.080518039966762]
We propose constrained parameter inference (COPI) as a new principle for learning.
COPI allows for the estimation of network parameters under the constraints of decorrelated neural inputs and top-down perturbations of neural states.
We show that COPI is not only more biologically plausible but also provides distinct advantages for fast learning, compared with standard backpropagation of error.
arXiv Detail & Related papers (2022-03-22T13:40:57Z)
- BioLeaF: A Bio-plausible Learning Framework for Training of Spiking Neural Networks [4.698975219970009]
We propose a new bio-plausible learning framework consisting of two components: a new architecture, and its supporting learning rules.
Under our microcircuit architecture, we employ the Spike-Timing-Dependent-Plasticity (STDP) rule operating in local compartments to update synaptic weights.
Our experiments show that the proposed framework demonstrates learning accuracy comparable to BP-based rules.
arXiv Detail & Related papers (2021-11-14T10:32:22Z)
- Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian Modeling [68.69431580852535]
We introduce a novel Gaussian process (GP) regression to incorporate the subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z)
- A simple normative network approximates local non-Hebbian learning in the cortex [12.940770779756482]
Neuroscience experiments demonstrate that the processing of sensory inputs by cortical neurons is modulated by instructive signals.
Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data.
The resulting online algorithms can be implemented by neural networks whose synaptic learning rules resemble the calcium plateau potential dependent plasticity observed in the cortex.
arXiv Detail & Related papers (2020-10-23T20:49:44Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.