BioLeaF: A Bio-plausible Learning Framework for Training of Spiking
Neural Networks
- URL: http://arxiv.org/abs/2111.13188v1
- Date: Sun, 14 Nov 2021 10:32:22 GMT
- Title: BioLeaF: A Bio-plausible Learning Framework for Training of Spiking
Neural Networks
- Authors: Yukun Yang, Peng Li
- Abstract summary: We propose a new bio-plausible learning framework consisting of two components: a new architecture, and its supporting learning rules.
Under our microcircuit architecture, we employ the Spike-Timing-Dependent-Plasticity (STDP) rule operating in local compartments to update synaptic weights.
Our experiments show that the proposed framework demonstrates learning accuracy comparable to BP-based rules.
- Score: 4.698975219970009
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Our brain consists of biological neurons encoding information through
accurate spike timing, yet both the architecture and learning rules of our
brain remain largely unknown. Compared with the recent development of
backpropagation-based (BP-based) methods that are able to train spiking neural
networks (SNNs) with high accuracy, biologically plausible methods are still in
their infancy. In this work, we wish to answer the question of whether it is
possible to attain comparable accuracy of SNNs trained by BP-based rules with
bio-plausible mechanisms. We propose a new bio-plausible learning framework,
consisting of two components: a new architecture, and its supporting learning
rules. With two types of cells and four types of synaptic connections, the
proposed local microcircuit architecture can compute and propagate error
signals through local feedback connections and support training of multi-layer
SNNs with a globally defined spiking error function. Under our microcircuit
architecture, we employ the Spike-Timing-Dependent-Plasticity (STDP) rule
operating in local compartments to update synaptic weights and achieve
supervised learning in a biologically plausible manner. Finally, we interpret
the proposed framework from an optimization point of view and show the
equivalence between it and the BP-based rules in a special case. Our
experiments show that the proposed framework demonstrates learning accuracy
comparable to BP-based rules and may provide new insights on how learning is
orchestrated in biological systems.
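To make the locally gated plasticity described above concrete, the following is a minimal sketch in Python/NumPy, not taken from the paper: a pair-based STDP rule whose weight update in each compartment is gated by an error signal assumed to arrive through the local feedback connections. The function name, constants, tensor shapes, and the multiplicative gating form are illustrative assumptions, not the paper's exact rule.

import numpy as np

# Illustrative sketch only (not the paper's exact rule): pair-based STDP traces
# gated by a locally delivered error signal. All constants are assumed values.
TAU_PRE, TAU_POST = 20.0, 20.0   # trace time constants in ms (assumed)
A_PLUS, A_MINUS = 0.01, 0.012    # potentiation / depression amplitudes (assumed)
DT = 1.0                         # simulation time step in ms

def error_gated_stdp_step(w, pre_spikes, post_spikes, pre_trace, post_trace, local_error):
    """One step of error-gated STDP for a single layer.

    w            : (n_post, n_pre) synaptic weight matrix
    pre_spikes   : (n_pre,)  0/1 spike vector at this step
    post_spikes  : (n_post,) 0/1 spike vector at this step
    pre_trace    : (n_pre,)  low-pass filtered presynaptic spike history
    post_trace   : (n_post,) low-pass filtered postsynaptic spike history
    local_error  : (n_post,) error signal assumed to arrive via local feedback
    """
    # Decay the eligibility traces and add the new spikes.
    pre_trace = pre_trace * np.exp(-DT / TAU_PRE) + pre_spikes
    post_trace = post_trace * np.exp(-DT / TAU_POST) + post_spikes
    # Pair-based STDP: pre-before-post potentiates, post-before-pre depresses.
    dw = A_PLUS * np.outer(post_spikes, pre_trace) \
         - A_MINUS * np.outer(post_trace, pre_spikes)
    # Gate the update by the per-neuron error signal to obtain supervised learning.
    w = w + local_error[:, None] * dw
    return w, pre_trace, post_trace

In this sketch the sign of local_error alone flips the update between Hebbian and anti-Hebbian, which is one way an error-modulated STDP rule can produce gradient-like supervised updates; the abstract describes the microcircuit's two cell types and four synapse types as computing and propagating such error signals through local feedback connections.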
Related papers
- Evolutionary algorithms as an alternative to backpropagation for
supervised training of Biophysical Neural Networks and Neural ODEs [12.357635939839696]
We investigate the use of "gradient-estimating" evolutionary algorithms for training biophysically based neural networks.
We find that EAs have several advantages that make them preferable to direct BP.
Our findings suggest that biophysical neurons could provide useful benchmarks for testing the limits of BP methods.
arXiv Detail & Related papers (2023-11-17T20:59:57Z) - Biologically inspired structure learning with reverse knowledge
distillation for spiking neural networks [19.33517163587031]
Spiking neural networks (SNNs) are well suited to sensory information recognition tasks due to their biological plausibility.
The performance of some current spiking-based models is limited by their structure: fully connected or overly deep architectures introduce substantial redundancy.
This paper proposes an evolutionary structure-construction method for building more reasonable SNNs.
arXiv Detail & Related papers (2023-04-19T08:41:17Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - A Computational Framework of Cortical Microcircuits Approximates
Sign-concordant Random Backpropagation [7.601127912271984]
We propose a hypothetical framework consisting of a new microcircuit architecture and its supporting Hebbian learning rules.
We employ the Hebbian rule operating in local compartments to update synaptic weights and achieve supervised learning in a biologically plausible manner.
The proposed framework is benchmarked on several datasets including MNIST and CIFAR10, demonstrating promising BP-comparable accuracy (a minimal sketch of sign-concordant random feedback appears after this list).
arXiv Detail & Related papers (2022-05-15T14:22:03Z) - Towards Scaling Difference Target Propagation by Learning Backprop
Targets [64.90165892557776]
Difference Target Propagation is a biologically plausible learning algorithm with a close relation to Gauss-Newton (GN) optimization.
We propose a novel feedback weight training scheme that ensures both that DTP approximates BP and that layer-wise feedback weight training can be restored.
We report the best performance ever achieved by DTP on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2022-01-31T18:20:43Z) - FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on the Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z) - Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z) - Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z) - Predictive Coding Can Do Exact Backpropagation on Any Neural Network [40.51949948934705]
We generalize (IL and) Z-IL by directly defining them on computational graphs.
This is the first biologically plausible algorithm shown to be equivalent to BP in how it updates parameters on any neural network.
arXiv Detail & Related papers (2021-03-08T11:52:51Z) - A More Biologically Plausible Local Learning Rule for ANNs [6.85316573653194]
The proposed learning rule is derived from the concepts of spike-timing-dependent plasticity and neuronal association.
A preliminary evaluation on binary classification with the MNIST and IRIS datasets shows performance comparable to backpropagation.
The local nature of learning opens the possibility of large-scale distributed and parallel learning in the network.
arXiv Detail & Related papers (2020-11-24T10:35:47Z) - Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
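As referenced from the cortical-microcircuit entry above, the following is a minimal sketch, assumed rather than taken from any of the listed papers, of sign-concordant random feedback: the backward pass of a small rate-based network uses fixed random feedback magnitudes whose signs are copied from the forward weights, instead of the exact transpose. Layer sizes, the squared-error loss, the learning rate, and the train_step function are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def relu_grad(x):
    return (x > 0).astype(x.dtype)

# Illustrative two-layer network (sizes and learning rate are assumptions).
n_in, n_hid, n_out, lr = 784, 100, 10, 0.01
W1 = rng.normal(0.0, 0.05, (n_hid, n_in))
W2 = rng.normal(0.0, 0.05, (n_out, n_hid))
B2_mag = np.abs(rng.normal(0.0, 0.05, (n_out, n_hid)))  # fixed random feedback magnitudes

def train_step(x, y_target):
    """One squared-error training step using sign-concordant random feedback."""
    global W1, W2
    # Forward pass.
    a1 = W1 @ x
    h1 = relu(a1)
    y = W2 @ h1
    err = y - y_target                  # gradient of 0.5 * ||y - y_target||^2 w.r.t. y
    # Backward pass: feedback weights share the sign of W2 but not its magnitude.
    B2 = np.sign(W2) * B2_mag
    delta1 = (B2.T @ err) * relu_grad(a1)
    # Local, layer-wise updates.
    W2 -= lr * np.outer(err, h1)
    W1 -= lr * np.outer(delta1, x)
    return 0.5 * float(np.sum(err ** 2))

With the exact transpose as feedback (B2 = W2), this reduces to ordinary backpropagation; keeping only the sign of the forward weights in the feedback path is the sign-concordant relaxation that the microcircuit framework in the entry above is reported to approximate.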