Spike Agreement Dependent Plasticity: A scalable Bio-Inspired learning paradigm for Spiking Neural Networks
- URL: http://arxiv.org/abs/2508.16216v1
- Date: Fri, 22 Aug 2025 08:40:42 GMT
- Title: Spike Agreement Dependent Plasticity: A scalable Bio-Inspired learning paradigm for Spiking Neural Networks
- Authors: Saptarshi Bej, Muhammed Sahad E, Gouri Lakshmi, Harshit Kumar, Pritam Kar, Bikas C Das
- Abstract summary: We introduce Spike Agreement Dependent Plasticity (SADP), a biologically inspired synaptic learning rule for Spiking Neural Networks (SNNs). SADP relies on the agreement between pre- and post-synaptic spike trains rather than precise spike-pair timing. Our framework bridges the gap between biological plausibility and computational scalability, offering a viable learning mechanism for neuromorphic systems.
- Score: 3.9214831838299595
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Spike Agreement Dependent Plasticity (SADP), a biologically inspired synaptic learning rule for Spiking Neural Networks (SNNs) that relies on the agreement between pre- and post-synaptic spike trains rather than precise spike-pair timing. SADP generalizes classical Spike-Timing-Dependent Plasticity (STDP) by replacing pairwise temporal updates with population-level correlation metrics such as Cohen's kappa. The SADP update rule admits linear-time complexity and supports efficient hardware implementation via bitwise logic. Empirical results on MNIST and Fashion-MNIST show that SADP, especially when equipped with spline-based kernels derived from our experimental iontronic organic memtransistor device data, outperforms classical STDP in both accuracy and runtime. Our framework bridges the gap between biological plausibility and computational scalability, offering a viable learning mechanism for neuromorphic systems.
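The abstract describes weight updates driven by Cohen's kappa between pre- and post-synaptic spike trains, computable in linear time. As a minimal sketch of that idea (the function names, learning-rate form, and the simple rule "potentiate in proportion to above-chance agreement" are assumptions, not the paper's exact formulation):

```python
import numpy as np

def cohens_kappa(pre, post):
    """Cohen's kappa between two binary spike trains, treated as an
    agreement metric. Linear time in the number of time bins."""
    pre = np.asarray(pre, dtype=bool)
    post = np.asarray(post, dtype=bool)
    p_observed = np.mean(pre == post)
    # Chance agreement from the marginal firing rates
    p_pre, p_post = pre.mean(), post.mean()
    p_chance = p_pre * p_post + (1 - p_pre) * (1 - p_post)
    if p_chance == 1.0:
        return 0.0  # degenerate case: both trains are constant
    return (p_observed - p_chance) / (1 - p_chance)

def sadp_update(w, pre, post, lr=0.01):
    """One hypothetical SADP-style weight update: potentiate when the
    trains agree beyond chance (kappa > 0), depress otherwise."""
    return w + lr * cohens_kappa(pre, post)

# Mostly-agreeing spike trains yield a positive (potentiating) update
pre_train  = [1, 0, 1, 1, 0, 0, 1, 0]
post_train = [1, 0, 1, 0, 0, 0, 1, 0]
w_new = sadp_update(0.5, pre_train, post_train)
```

Because kappa depends only on bin-wise equality counts and marginal rates, this update can be reduced to popcounts over bit vectors, which is consistent with the hardware-friendly bitwise implementation the abstract mentions.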
Related papers
- Supervised Spike Agreement Dependent Plasticity for Fast Local Learning in Spiking Neural Networks [6.376927936764407]
We introduce a supervised extension of Spike Agreement-Dependent Plasticity (SADP). SADP replaces pairwise spike-timing comparisons with population-level agreement metrics such as Cohen's kappa. Experiments on MNIST, Fashion-MNIST, CIFAR-10, and biomedical image classification tasks demonstrate competitive performance and fast convergence.
arXiv Detail & Related papers (2026-01-13T13:09:34Z) - ChronoPlastic Spiking Neural Networks [0.0]
Spiking neural networks (SNNs) offer a biologically grounded and energy-efficient alternative to conventional neural architectures. CPSNNs embed temporal control directly within local synaptic dynamics. CPSNNs learn long-gap temporal dependencies significantly faster and more reliably than standard SNN baselines.
arXiv Detail & Related papers (2025-12-17T06:58:04Z) - Self-Motivated Growing Neural Network for Adaptive Architecture via Local Structural Plasticity [0.2578242050187029]
Control policies in deep reinforcement learning are often implemented with fixed-capacity multilayer perceptrons trained by backpropagation. This paper introduces the Self-Motivated Growing Neural Network (SMGrNN), a controller whose topology evolves online through a local Structural Plasticity Module.
arXiv Detail & Related papers (2025-12-14T14:31:21Z) - Extending Spike-Timing Dependent Plasticity to Learning Synaptic Delays [50.45313162890861]
We introduce a novel learning rule for simultaneously learning synaptic connection strengths and delays. We validate our approach by extending a widely-used SNN model for classification trained with unsupervised learning. Results demonstrate that our proposed method consistently achieves superior performance across a variety of test scenarios.
arXiv Detail & Related papers (2025-06-17T21:24:58Z) - Beyond Pairwise Plasticity: Group-Level Spike Synchrony Facilitates Efficient Learning in Spiking Neural Networks [4.8343143600505245]
We introduce a spike-synchrony-dependent plasticity (SSDP) rule that adjusts synaptic weights based on the degree of coordinated firing among neurons. SSDP supports stable and scalable learning by encouraging neurons to form coherent activity patterns. SSDP operates in a fully event-driven manner and incurs minimal computational cost.
arXiv Detail & Related papers (2025-04-14T04:01:40Z) - Learning Delays Through Gradients and Structure: Emergence of Spatiotemporal Patterns in Spiking Neural Networks [0.06752396542927405]
We present a Spiking Neural Network (SNN) model that incorporates learnable synaptic delays through two approaches.
In the latter approach, the network selects and prunes connections, optimizing the delays in sparse connectivity settings.
Our results demonstrate the potential of combining delay learning with dynamic pruning to develop efficient SNN models for temporal data processing.
arXiv Detail & Related papers (2024-07-07T11:55:48Z) - Neuromorphic Online Learning for Spatiotemporal Patterns with a Forward-only Timeline [5.094970748243019]
Spiking neural networks (SNNs) are bio-plausible computing models with high energy efficiency.
Backpropagation Through Time (BPTT) is traditionally used to train SNNs.
We present Spatiotemporal Online Learning for Synaptic Adaptation (SOLSA), specifically designed for online learning of SNNs.
arXiv Detail & Related papers (2023-07-21T02:47:03Z) - ETLP: Event-based Three-factor Local Plasticity for online learning with neuromorphic hardware [105.54048699217668]
We show competitive accuracy with a clear advantage in computational complexity for Event-based Three-factor Local Plasticity (ETLP). We also show that when using local plasticity, threshold adaptation in spiking neurons and a recurrent topology are necessary to learn temporal patterns with a rich temporal structure.
arXiv Detail & Related papers (2023-01-19T19:45:42Z) - An Unsupervised STDP-based Spiking Neural Network Inspired By Biologically Plausible Learning Rules and Connections [10.188771327458651]
Spike-timing-dependent plasticity (STDP) is a general learning rule in the brain, but spiking neural networks (SNNs) trained with STDP alone are inefficient and perform poorly.
We design an adaptive synaptic filter and introduce the adaptive spiking threshold to enrich the representation ability of SNNs.
Our model achieves the current state-of-the-art performance of unsupervised STDP-based SNNs in the MNIST and FashionMNIST datasets.
arXiv Detail & Related papers (2022-07-06T14:53:32Z) - HiPPO: Recurrent Memory with Optimal Polynomial Projections [93.3537706398653]
We introduce a general framework (HiPPO) for the online compression of continuous signals and discrete time series by projection onto bases.
Given a measure that specifies the importance of each time step in the past, HiPPO produces an optimal solution to a natural online function approximation problem.
This formal framework yields a new memory update mechanism (HiPPO-LegS) that scales through time to remember all history, avoiding priors on the timescale.
arXiv Detail & Related papers (2020-08-17T23:39:33Z) - Equilibrium Propagation with Continual Weight Updates [69.87491240509485]
We propose a learning algorithm that bridges Machine Learning and Neuroscience by computing gradients closely matching those of Backpropagation Through Time (BPTT).
We prove theoretically that, provided the learning rates are sufficiently small, at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss given by BPTT.
These results bring EP a step closer to biology by better complying with hardware constraints while maintaining its intimate link with backpropagation.
arXiv Detail & Related papers (2020-04-29T14:54:30Z) - Continual Weight Updates and Convolutional Architectures for Equilibrium Propagation [69.87491240509485]
Equilibrium Propagation (EP) is a biologically inspired alternative algorithm to backpropagation (BP) for training neural networks.
We propose a discrete-time formulation of EP which simplifies equations, speeds up training, and extends EP to CNNs.
Our CNN model achieves the best performance ever reported on MNIST with EP.
arXiv Detail & Related papers (2020-04-29T12:14:06Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.