Biologically Plausible Learning via Bidirectional Spike-Based Distillation
- URL: http://arxiv.org/abs/2509.20284v1
- Date: Wed, 24 Sep 2025 16:17:06 GMT
- Title: Biologically Plausible Learning via Bidirectional Spike-Based Distillation
- Authors: Changze Lv, Yifei Wang, Yanxun Zhang, Yiyang Lu, Jingwen Xu, Di Yu, Xin Du, Xuanjing Huang, Xiaoqing Zheng
- Abstract summary: We introduce Bidirectional Spike-based Distillation (BSD), a novel learning algorithm that jointly trains a feedforward and a backward spiking network. BSD achieves performance comparable to networks trained with classical error backpropagation. These findings represent a significant step toward biologically grounded, spike-driven learning in neural networks.
- Score: 47.74332895886508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Developing biologically plausible learning algorithms that can achieve performance comparable to error backpropagation remains a longstanding challenge. Existing approaches often compromise biological plausibility by entirely avoiding the use of spikes for error propagation or relying on both positive and negative learning signals, while the question of how spikes can represent negative values remains unresolved. To address these limitations, we introduce Bidirectional Spike-based Distillation (BSD), a novel learning algorithm that jointly trains a feedforward and a backward spiking network. We formulate learning as a transformation between two spiking representations (i.e., stimulus encoding and concept encoding) so that the feedforward network implements perception and decision-making by mapping stimuli to actions, while the backward network supports memory recall by reconstructing stimuli from concept representations. Extensive experiments on diverse benchmarks, including image recognition, image generation, and sequential regression, show that BSD achieves performance comparable to networks trained with classical error backpropagation. These findings represent a significant step toward biologically grounded, spike-driven learning in neural networks.
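To make the training structure concrete, here is a minimal NumPy sketch of the bidirectional loop the abstract describes: a feedforward spiking network maps a stimulus encoding to a concept encoding, a backward network reconstructs the stimulus from the concept, and each is trained toward the other direction's spike representation. All names and mechanisms below (`SpikingLayer`, Poisson rate coding, the `local_update` rule) are illustrative assumptions, not the distillation procedure actually derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spikes(rates, steps):
    """Rate-code a vector of intensities in [0, 1] as a (steps, dim) binary spike train."""
    return (rng.random((steps, rates.size)) < rates).astype(np.float32)

class SpikingLayer:
    """One layer of leaky integrate-and-fire neurons (illustrative stand-in for the paper's networks)."""
    def __init__(self, n_in, n_out, tau=0.9, threshold=1.0):
        self.w = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        self.tau, self.threshold = tau, threshold

    def forward(self, spikes):                      # spikes: (steps, n_in)
        v = np.zeros(self.w.shape[1])
        out = np.zeros((spikes.shape[0], self.w.shape[1]), np.float32)
        for t, s in enumerate(spikes):
            v = self.tau * v + s @ self.w           # leaky membrane integration
            out[t] = v >= self.threshold            # emit a spike on threshold crossing
            v = np.where(out[t] > 0, 0.0, v)        # reset membrane after a spike
        return out

def local_update(layer, pre, post, target, lr=1e-3):
    """Hebbian-style delta rule on mean firing rates. NOTE: this signed rate
    error is an illustrative placeholder, not the spike-based rule in the paper."""
    err = target.mean(0) - post.mean(0)
    layer.w += lr * np.outer(pre.mean(0), err)

# Forward net implements perception/decision: stimulus spikes -> concept spikes.
# Backward net implements recall: concept spikes -> reconstructed stimulus spikes.
forward_net = SpikingLayer(784, 10)
backward_net = SpikingLayer(10, 784)

steps = 50
x = rng.random(784)                                 # hypothetical stimulus intensities
y = np.eye(10)[3]                                   # hypothetical one-hot concept

stim = poisson_spikes(x, steps)                     # stimulus encoding
concept = poisson_spikes(y, steps)                  # concept encoding

for _ in range(100):
    pred_concept = forward_net.forward(stim)        # feedforward: stimulus -> concept
    recon_stim = backward_net.forward(concept)      # backward: concept -> stimulus
    # Each network is nudged toward the other direction's spike representation.
    local_update(forward_net, stim, pred_concept, concept)
    local_update(backward_net, concept, recon_stim, stim)
```

Note that the signed error in `local_update` is precisely the kind of quantity the paper argues spikes cannot directly represent; the sketch conveys only the joint feedforward/backward training loop, not BSD's spike-based error representation.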
Related papers
- Concept-Guided Interpretability via Neural Chunking [64.6429903327095]
We show that neural networks exhibit patterns in their raw population activity that mirror regularities in the training data. We propose three methods to extract recurring chunks on a neural population level. Our work points to a new direction for interpretability, one that harnesses both cognitive principles and the structure of naturalistic data.
arXiv Detail & Related papers (2025-05-16T13:49:43Z) - A Basic Evaluation of Neural Networks Trained with the Error Diffusion Learning Algorithm [0.0]
Kaneko's Error Diffusion Learning Algorithm (EDLA) diffuses a single global error signal throughout a network composed of paired excitatory-inhibitory sublayers. Experiments indicate that EDLA networks can consistently achieve high accuracy.
arXiv Detail & Related papers (2025-04-21T02:41:17Z) - Efficient Rectification of Neuro-Symbolic Reasoning Inconsistencies by Abductive Reflection [53.82376573677766]
Neuro-Symbolic (NeSy) AI could be regarded as an analogy to human dual-process cognition. We propose to improve NeSy systems by introducing Abductive Reflection (ABL-Refl) based on the Abductive Learning (ABL) framework.
arXiv Detail & Related papers (2024-12-11T15:24:07Z) - Recurrent Joint Embedding Predictive Architecture with Recurrent Forward Propagation Learning [0.0]
We introduce a vision network inspired by biological principles. The network learns by predicting the representation of the next image patch (fixation) based on the sequence of past fixations. We also introduce Recurrent-Forward propagation, a learning algorithm that avoids biologically unrealistic backpropagation through time or memory-inefficient real-time recurrent learning.
arXiv Detail & Related papers (2024-11-10T01:40:42Z) - A Robust Backpropagation-Free Framework for Images [47.97322346441165]
We present an error kernel driven activation alignment algorithm for image data.
EKDAA accomplishes this through the introduction of locally derived error transmission kernels and error maps.
Results are presented for an EKDAA trained CNN that employs a non-differentiable activation function.
arXiv Detail & Related papers (2022-06-03T21:14:10Z) - Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z) - Hybrid Predictive Coding: Inferring, Fast and Slow [62.997667081978825]
We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner.
We demonstrate that our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs with minimal computational expense.
arXiv Detail & Related papers (2022-04-05T12:52:45Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - An error-propagation spiking neural network compatible with neuromorphic processors [2.432141667343098]
We present a spike-based learning method that approximates back-propagation using local weight update mechanisms.
We introduce a network architecture that enables synaptic weight update mechanisms to back-propagate error signals.
This work represents a first step towards the design of ultra-low power mixed-signal neuromorphic processing systems.
arXiv Detail & Related papers (2021-04-12T07:21:08Z) - A More Biologically Plausible Local Learning Rule for ANNs [6.85316573653194]
The proposed learning rule is derived from the concepts of spike-timing-dependent plasticity and neuronal association.
A preliminary evaluation on binary classification of the MNIST and IRIS datasets shows performance comparable to backpropagation.
The local nature of the learning rule opens the possibility of large-scale distributed and parallel learning in the network.
arXiv Detail & Related papers (2020-11-24T10:35:47Z) - Biological credit assignment through dynamic inversion of feedforward networks [11.345796608258434]
We show that feedforward network transformations can be effectively inverted through dynamics.
We map these dynamics onto generic feedforward networks, and show that the resulting algorithm performs well on supervised and unsupervised datasets.
arXiv Detail & Related papers (2020-07-10T00:03:01Z) - Learning to Learn with Feedback and Local Plasticity [9.51828574518325]
We employ meta-learning to discover networks that learn using feedback connections and local, biologically inspired learning rules.
Our experiments show that meta-trained networks effectively use feedback connections to perform online credit assignment in multi-layer architectures.
arXiv Detail & Related papers (2020-06-16T22:49:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.