PC-SNN: Predictive Coding-based Local Hebbian Plasticity Learning in Spiking Neural Networks
- URL: http://arxiv.org/abs/2211.15386v2
- Date: Tue, 16 Sep 2025 03:30:29 GMT
- Title: PC-SNN: Predictive Coding-based Local Hebbian Plasticity Learning in Spiking Neural Networks
- Authors: Haidong Wang, Xiaogang Xiong, Mengting Lan, Yinghao Chu, Zixuan Jiang, KC Santosh, Shimin Wang, Renxin Zhong
- Abstract summary: Spiking Neural Networks (SNNs) emulate the brain's information processing with unparalleled biological plausibility. We propose PC-SNN, a novel learning framework that integrates predictive coding with SNNs to enable biologically plausible, local Hebbian plasticity.
- Score: 9.026880521552153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs), regarded as the third generation of neural networks, emulate the brain's information processing with unparalleled biological plausibility compared to traditional neural networks. However, their non-linear, event-driven dynamics pose significant challenges for training, and existing methods often deviate from neuroscientific principles of cortical learning. Drawing inspiration from predictive coding theory, a leading model of brain information processing, we propose PC-SNN, a novel learning framework that integrates predictive coding with SNNs to enable biologically plausible, local Hebbian plasticity without reliance on backpropagation. Unlike conventional SNN training approaches, PC-SNN leverages only local computations, aligning with the brain's distributed processing and overcoming the biological implausibility of global error propagation. Our classification model achieves competitive performance on benchmark datasets, including Caltech Face/Motorbike, MNIST, and CIFAR10, surpassing state-of-the-art multi-layer SNNs. Furthermore, our predictive coding-based regression model outperforms backpropagation-based methods while adhering to local plasticity constraints, offering a scalable and biologically grounded alternative for SNN training. PC-SNN not only drives progress in neuromorphic computing by validating the adaptability of bio-inspired algorithms within spiking neural architectures, but also unveils novel understandings of neurocognitive learning processes, presenting a conceptual framework distinguished by its theoretical originality and functional efficacy.
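To make the core idea concrete, below is a minimal sketch of a generic predictive-coding network trained with purely local, Hebbian-style weight updates. It illustrates the general scheme the abstract describes, not the authors' exact PC-SNN rule: the rate-based sigmoid surrogate for spiking activity, the layer sizes, and all learning rates are assumptions made only for this example.

```python
# Minimal predictive-coding sketch with local, Hebbian-style weight updates.
# Illustrative only: this is the generic predictive-coding scheme, not the
# exact PC-SNN rule from the paper; the sigmoid "spike rate" surrogate,
# layer sizes, and hyperparameters are assumptions for the example.
import numpy as np

rng = np.random.default_rng(0)

def rate(v):
    """Rate-based surrogate for spiking activity (assumed sigmoid)."""
    return 1.0 / (1.0 + np.exp(-v))

def d_rate(v):
    s = rate(v)
    return s * (1.0 - s)

# Three layers (input -> hidden -> output); sizes are arbitrary examples.
sizes = [784, 256, 10]
W = [rng.normal(0.0, 0.05, (sizes[i + 1], sizes[i])) for i in range(2)]

def pc_step(x_in, target, W, n_infer=20, lr_x=0.1, lr_w=1e-3):
    # Value nodes: clamp input and target, initialize the hidden layer feedforward.
    x = [x_in, rate(W[0] @ x_in), target]
    for _ in range(n_infer):
        # Prediction errors are local to each connection.
        eps = [x[l + 1] - rate(W[l] @ x[l]) for l in range(2)]
        # Relax the (unclamped) hidden activity to reduce the local error energy.
        x[1] -= lr_x * (eps[0] - W[1].T @ (eps[1] * d_rate(W[1] @ x[1])))
    # Hebbian-style update: postsynaptic error times presynaptic activity.
    eps = [x[l + 1] - rate(W[l] @ x[l]) for l in range(2)]
    for l in range(2):
        W[l] += lr_w * np.outer(eps[l] * d_rate(W[l] @ x[l]), x[l])
    return W

# Usage with random data, purely for illustration.
x_in = rng.random(784)
target = np.eye(10)[3]  # one-hot label
W = pc_step(x_in, target, W)
```

In this generic formulation, inference relaxes the hidden activities to minimize local prediction errors, and each weight update depends only on presynaptic activity and the postsynaptic error, which is the sense in which the plasticity is local and Hebbian rather than backpropagation-based.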
Related papers
- General Self-Prediction Enhancement for Spiking Neurons [71.01912385372577]
Spiking Neural Networks (SNNs) are highly energy-efficient due to event-driven, sparse computation, but their training is challenged by spike non-differentiability and trade-offs among performance, efficiency, and biological plausibility. We propose a self-prediction enhanced spiking neuron method that generates an internal prediction current from its input-output history to modulate the membrane potential. This design offers dual advantages: it creates a continuous gradient path that alleviates vanishing gradients and boosts training stability and accuracy, while also aligning with biological principles, resembling distal dendritic modulation and error-driven synaptic plasticity.
arXiv Detail & Related papers (2026-01-29T15:08:48Z) - CogniSNN: Enabling Neuron-Expandability, Pathway-Reusability, and Dynamic-Configurability with Random Graph Architectures in Spiking Neural Networks [13.913208605378854]
Spiking neural networks (SNNs) are expected to bridge the gap between artificial intelligence and computational neuroscience. We introduce a new SNN paradigm, named Cognition-aware SNN (CogniSNN), by incorporating Random Graph Architecture (RGA).
arXiv Detail & Related papers (2025-12-12T17:36:31Z) - Learning Internal Biological Neuron Parameters and Complexity-Based Encoding for Improved Spiking Neural Networks Performance [0.0]
This study introduces a novel approach by replacing the traditional perceptron model with a biologically inspired probabilistic meta neuron model. As a second key contribution, we present a new biologically inspired classification framework that uniquely integrates SNNs with Lempel-Ziv complexity (LZC). We consider learning algorithms such as backpropagation, spike-timing-dependent plasticity (STDP), and the Tempotron learning rule.
arXiv Detail & Related papers (2025-08-08T09:14:49Z) - CogniSNN: A First Exploration to Random Graph Architecture based Spiking Neural Networks with Enhanced Expandability and Neuroplasticity [8.24896024250985]
This paper develops a new modeling paradigm for spiking neural networks (SNNs) with random graph architecture (RGA). We improve the expandability and neuroplasticity of CogniSNN by introducing a modified spiking residual neural node (ResNode). Experiments show that CogniSNN with the re-designed ResNode performs outstandingly on neuromorphic datasets with fewer parameters.
arXiv Detail & Related papers (2025-05-09T12:21:23Z) - Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the computational time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z) - Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation [20.34272550256856]
Spiking neural networks (SNNs) mimic the biological neural system to convey information via discrete spikes.
Our work achieves state-of-the-art performance for training SNNs on both static and neuromorphic datasets.
arXiv Detail & Related papers (2024-07-12T08:17:24Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded in homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL)
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z) - Fluctuation-driven initialization for spiking neural network training [3.976291254896486]
Spiking neural networks (SNNs) underlie low-power, fault-tolerant information processing in the brain.
We develop a general strategy for SNNs inspired by the fluctuation-driven regime commonly observed in the brain.
arXiv Detail & Related papers (2022-06-21T09:48:49Z) - A Synapse-Threshold Synergistic Learning Approach for Spiking Neural Networks [1.8556712517882232]
Spiking neural networks (SNNs) have demonstrated excellent capabilities in various intelligent scenarios.
In this study, we develop a novel synergistic learning approach that involves simultaneously training synaptic weights and spike thresholds in SNNs.
arXiv Detail & Related papers (2022-06-10T06:41:36Z) - Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural Networks and Its Mapping Relationship to Deep Neural Networks [7.840247953745616]
Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability.
This paper establishes a precise mathematical mapping between the biological parameters of the Linear Leaky-Integrate-and-Fire (LIF) model/SNNs and the parameters of ReLU-AN/Deep Neural Networks (DNNs); an illustrative sketch of the LIF-rate/ReLU relationship appears after this list.
arXiv Detail & Related papers (2022-05-31T17:02:26Z) - Advancing Residual Learning towards Powerful Deep Spiking Neural Networks [16.559670769601038]
Residual learning and shortcuts have proven to be an important approach for training deep neural networks.
MS-ResNet is able to significantly extend the depth of directly trained SNNs.
MS-ResNet 104 achieves 76.02% accuracy on ImageNet, a first in the domain of directly trained SNNs.
arXiv Detail & Related papers (2021-12-15T05:47:21Z) - BioGrad: Biologically Plausible Gradient-Based Learning for Spiking Neural Networks [0.0]
Spiking neural networks (SNNs) are delivering energy-efficient, massively parallel, and low-latency solutions to AI problems.
To harness these computational benefits, SNNs need to be trained by learning algorithms that adhere to brain-inspired neuromorphic principles.
We propose a biologically plausible gradient-based learning algorithm for SNNs that is functionally equivalent to backprop.
arXiv Detail & Related papers (2021-10-27T00:07:25Z) - Exploiting Heterogeneity in Operational Neural Networks by Synaptic Plasticity [87.32169414230822]
The recently proposed network model, Operational Neural Networks (ONNs), generalizes conventional Convolutional Neural Networks (CNNs).
In this study, the focus is on searching for the best-possible operator set(s) for the hidden neurons of the network based on the Synaptic Plasticity paradigm, which constitutes the essential learning theory in biological neurons.
Experimental results over highly challenging problems demonstrate that the elite ONNs, even with few neurons and layers, can achieve superior learning performance compared to GIS-based ONNs.
arXiv Detail & Related papers (2020-08-21T19:03:23Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z) - Convolutional Spiking Neural Networks for Spatio-Temporal Feature Extraction [3.9898522485253256]
Spiking neural networks (SNNs) can be used in low-power and embedded systems.
However, temporal coding in layers of convolutional neural networks and other types of SNNs has yet to be studied.
We present a new deep spiking architecture to tackle real-world problems.
arXiv Detail & Related papers (2020-03-27T11:58:51Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
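As referenced in the Linear Leaky-Integrate-and-Fire entry above, the following is a minimal sketch of the qualitative relationship between an LIF neuron's firing rate and a ReLU-like response curve. It is not the exact parameter mapping derived in that paper; the neuron constants, simulation settings, and the ReLU reference scaling below are assumptions made purely for illustration.

```python
# Illustrative sketch: firing rate of a leaky-integrate-and-fire (LIF) neuron
# versus a ReLU-like reference. Not the paper's exact LIF <-> ReLU-AN mapping;
# all constants below (tau, threshold, dt, simulation time) are assumptions.
import numpy as np

def lif_firing_rate(i_const, tau=20e-3, v_rest=0.0, v_th=1.0, v_reset=0.0,
                    dt=1e-4, t_sim=1.0):
    """Simulate an LIF neuron under constant input current; return rate in Hz."""
    v, spikes = v_rest, 0
    for _ in range(int(t_sim / dt)):
        v += (dt / tau) * (-(v - v_rest) + i_const)  # Euler step of membrane dynamics
        if v >= v_th:                                # threshold crossing: spike and reset
            v = v_reset
            spikes += 1
    return spikes / t_sim

# Below threshold the rate is zero; above it the rate grows monotonically and
# approaches a linear (ReLU-like) relationship for strong inputs.
for i_const in [0.5, 1.0, 1.5, 2.0, 3.0]:
    rate_hz = lif_firing_rate(i_const)
    relu_ref = max(0.0, i_const - 1.0)  # relu(I - threshold), unscaled reference
    print(f"I = {i_const:.1f} -> rate = {rate_hz:6.1f} Hz (ReLU-like reference: {relu_ref:.1f})")
```

The point of the sketch is only that an LIF neuron's rate-current curve is zero below threshold and approximately linear well above it, which is the intuition behind mapping LIF/SNN parameters onto ReLU-based DNN parameters.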