Synchrony-Gated Plasticity with Dopamine Modulation for Spiking Neural Networks
- URL: http://arxiv.org/abs/2512.07194v1
- Date: Mon, 08 Dec 2025 06:10:44 GMT
- Title: Synchrony-Gated Plasticity with Dopamine Modulation for Spiking Neural Networks
- Authors: Yuchen Tian, Samuel Tensingh, Jason Eshraghian, Nhan Duy Truong, Omid Kavehei
- Abstract summary: Dopamine-Modulated Spike-Synchrony-Dependent Plasticity (DA-SSDP) is a loss-sensitive, synchrony-based rule. DA-SSDP condenses spike patterns into a synchrony metric at the batch level.
- Score: 6.085945372100414
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While surrogate backpropagation proves useful for training deep spiking neural networks (SNNs), incorporating biologically inspired local signals on a large scale remains challenging. This difficulty stems primarily from the high memory demands of maintaining accurate spike-timing logs and the potential for purely local plasticity adjustments to clash with the supervised learning goal. To effectively leverage local signals derived from spiking neuron dynamics, we introduce Dopamine-Modulated Spike-Synchrony-Dependent Plasticity (DA-SSDP), a loss-sensitive rule that brings a synchrony-based local learning signal to the model. DA-SSDP condenses spike patterns into a synchrony metric at the batch level. An initial brief warm-up phase assesses its relationship to the task loss and sets a fixed gate that subsequently adjusts the local update's magnitude. In cases where synchrony proves unrelated to the task, the gate settles at one, simplifying DA-SSDP to a basic two-factor synchrony mechanism that delivers minor weight adjustments driven by concurrent spike firing and a Gaussian latency function. These small weight updates are only added to the network's deeper layers following the backpropagation phase, and our tests showed this simplified version did not degrade performance and sometimes gave a small accuracy boost, serving as a regularizer during training. The rule stores only binary spike indicators and first-spike latencies with a Gaussian kernel. Without altering the model structure or optimization routine, evaluations on benchmarks like CIFAR-10 (+0.42\%), CIFAR-100 (+0.99\%), CIFAR10-DVS (+0.1\%), and ImageNet-1K (+0.73\%) demonstrated consistent accuracy gains, accompanied by a minor increase in computational overhead. Our code is available at https://github.com/NeuroSyd/DA-SSDP.
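The simplified two-factor update described in the abstract (concurrent spike firing weighted by a Gaussian function of first-spike latency differences, scaled by a fixed gate) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the array shapes, hyperparameter values (`sigma`, `lr`), and function names are assumptions for exposition.

```python
import numpy as np

def gaussian_latency_kernel(dt, sigma=2.0):
    """Gaussian kernel over first-spike latency differences (sigma is an assumed hyperparameter)."""
    return np.exp(-(dt ** 2) / (2.0 * sigma ** 2))

def ssdp_update(pre_spikes, post_spikes, pre_latency, post_latency,
                gate=1.0, lr=1e-4, sigma=2.0):
    """Sketch of the two-factor synchrony update.

    pre_spikes  : (B, N_pre)  binary spike indicators for a batch
    post_spikes : (B, N_post) binary spike indicators for a batch
    pre_latency, post_latency : first-spike latencies (same shapes)
    gate : scalar fixed after the warm-up phase (1.0 when synchrony is task-unrelated)
    Returns a small additive weight update of shape (N_post, N_pre).
    """
    B = pre_spikes.shape[0]
    # Concurrent-firing term: batch-averaged co-activation of pre/post neurons.
    coactivation = post_spikes.T @ pre_spikes / B              # (N_post, N_pre)
    # Latency term: Gaussian weighting of first-spike latency differences,
    # averaged over the batch.
    dt = post_latency[:, :, None] - pre_latency[:, None, :]   # (B, N_post, N_pre)
    latency_weight = gaussian_latency_kernel(dt, sigma).mean(axis=0)
    return gate * lr * coactivation * latency_weight
```

In this reading, the update would be added to a deep layer's weights after the backpropagation step, e.g. `W += ssdp_update(...)`, leaving the optimizer untouched.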
Related papers
- When to restart? Exploring escalating restarts on convergence [0.06524460254566904]
We propose a simple yet effective strategy, which we call Stochastic Gradient Descent with Escalating Restarts (SGD-ER). Our method monitors training progress and triggers restarts when stagnation is detected, linearly escalating the learning rate to escape sharp local minima. Compared to standard schedulers, SGD-ER improves test accuracy by 0.5-4.5%, demonstrating the benefit of convergence-aware escalating restarts for better local optima.
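The restart mechanism summarized above (monitor progress, restart on stagnation, escalate the learning rate linearly) can be sketched as a small scheduler. The class name, thresholds, and linear escalation form below are illustrative assumptions, not the paper's code:

```python
class EscalatingRestartScheduler:
    """Hypothetical sketch of a convergence-aware restart schedule.

    When the loss fails to improve for `patience` steps, a restart is
    triggered and the learning rate peak is escalated linearly.
    """

    def __init__(self, base_lr=0.1, escalation=0.05, patience=5):
        self.base_lr = base_lr
        self.escalation = escalation   # assumed linear increment per restart
        self.patience = patience
        self.best_loss = float("inf")
        self.stale = 0
        self.restarts = 0
        self.lr = base_lr

    def step(self, loss):
        """Record the latest loss and return the learning rate to use next."""
        if loss < self.best_loss - 1e-8:
            self.best_loss = loss
            self.stale = 0
        else:
            self.stale += 1
        if self.stale >= self.patience:    # stagnation detected -> restart
            self.restarts += 1
            self.stale = 0
            self.lr = self.base_lr + self.escalation * self.restarts
        return self.lr
```

A training loop would call `lr = sched.step(epoch_loss)` once per evaluation interval and feed `lr` back into the optimizer.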
arXiv Detail & Related papers (2026-03-04T14:35:27Z) - Generalizing GNNs with Tokenized Mixture of Experts [75.8310720413187]
We show that improving stability requires reducing reliance on shift-sensitive features, leaving an irreducible worst-case generalization floor. We propose STEM-GNN, a pretrain-then-finetune framework with a mixture-of-experts encoder for diverse computation paths. Across nine node, link, and graph benchmarks, STEM-GNN achieves a stronger three-way balance, improving robustness to degree/homophily shifts and to feature/edge corruptions while remaining competitive on clean graphs.
arXiv Detail & Related papers (2026-02-09T22:48:30Z) - Time Is All It Takes: Spike-Retiming Attacks on Event-Driven Spiking Neural Networks [87.16809558673403]
Spiking neural networks (SNNs) compute with discrete spikes and exploit temporal structure. We study a timing-only adversary that retimes existing spikes while preserving spike counts and amplitudes in event-driven SNNs.
arXiv Detail & Related papers (2026-02-03T09:06:53Z) - DelRec: learning delays in recurrent spiking neural networks [44.373421535679476]
Spiking neural networks (SNNs) are a bio-inspired alternative to conventional real-valued deep learning models. DelRec is the first SGL-based method to train axonal or synaptic delays in recurrent spiking layers. Our results demonstrate that recurrent delays are critical for temporal processing in SNNs.
arXiv Detail & Related papers (2025-09-29T14:38:57Z) - Network-Optimised Spiking Neural Network for Event-Driven Networking [2.5941336499463383]
Spiking neural networks offer event-driven computation suited to time-critical networking tasks such as anomaly detection, local routing control, and congestion management at the edge. We introduce Network-Optimised Spiking (NOS), a compact two-variable unit whose state encodes normalised queue occupancy and a recovery resource. We provide guidance for data-driven initialisation, surrogate-gradient training with a homotopy on reset sharpness, and explicit stability checks with topology-dependent bounds for resource-constrained deployments.
arXiv Detail & Related papers (2025-09-27T22:31:24Z) - Efficient Memristive Spiking Neural Networks Architecture with Supervised In-Situ STDP Method [0.0]
Memristor-based Spiking Neural Networks (SNNs) with temporal spike encoding enable ultra-low-energy computation. This paper presents a circuit-level memristive spiking neural network (SNN) architecture trained using a proposed novel supervised in-situ learning algorithm.
arXiv Detail & Related papers (2025-07-28T17:09:48Z) - NeRF-based CBCT Reconstruction needs Normalization and Initialization [53.58395475423445]
NeRF-based methods suffer from a local-global training mismatch between their two key components: the hash encoder and the neural network. We introduce a Normalized Hash, which enhances feature consistency and mitigates the mismatch. The neural network exhibits improved stability during early training, enabling faster convergence and enhanced reconstruction performance.
arXiv Detail & Related papers (2025-06-24T16:01:45Z) - Learning with Spike Synchrony in Spiking Neural Networks [3.8506283985103447]
Spiking neural networks (SNNs) promise energy-efficient computation by mimicking biological neural dynamics. We introduce spike-synchrony-dependent plasticity (SSDP), a training approach that adjusts synaptic weights based on the degree of synchronized firing rather than precise spike timing.
arXiv Detail & Related papers (2025-04-14T04:01:40Z) - Efficiently Training Time-to-First-Spike Spiking Neural Networks from Scratch [39.05124192217359]
Spiking Neural Networks (SNNs) are well-suited for energy-efficient neuromorphic hardware. Time-to-First-Spike (TTFS) coding, which uses a single spike per neuron, offers extreme sparsity and energy efficiency but suffers from unstable training and low accuracy due to its sparse firing. We propose a training framework incorporating parameter normalization, training normalization, temporal output decoding, and pooling layer re-evaluation. Experiments show the framework stabilizes and accelerates training, reduces latency, and achieves state-of-the-art accuracy for TTFS SNNs on M
arXiv Detail & Related papers (2024-10-31T04:14:47Z) - SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z) - An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete actions (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a computation can adapt well to complex environments like dynamic wireless channels and arbitrary processing, and is capable of supporting the 5G URL
arXiv Detail & Related papers (2022-01-09T09:31:50Z) - Rapid training of deep neural networks without skip connections or normalization layers using Deep Kernel Shaping [46.083745557823164]
We identify the main pathologies present in deep networks that prevent them from training fast and generalizing to unseen data.
We show how these can be avoided by carefully controlling the "shape" of the network's kernel function.
arXiv Detail & Related papers (2021-10-05T00:49:36Z) - GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training [59.160154997555956]
We present GradInit, an automated and architecture-agnostic method for initializing neural networks.
It is based on a simple heuristic; the variance of each network layer is adjusted so that a single step of SGD or Adam results in the smallest possible loss value.
It also enables training the original Post-LN Transformer for machine translation without learning rate warmup.
arXiv Detail & Related papers (2021-02-16T11:45:35Z) - Supervised Learning in Temporally-Coded Spiking Neural Networks with Approximate Backpropagation [0.021506382989223777]
We propose a new supervised learning method for temporally-encoded multilayer spiking networks to perform classification.
The method employs a reinforcement signal that mimics backpropagation but is far less computationally intensive.
In simulated MNIST handwritten digit classification, two-layer networks trained with this rule matched the performance of a comparable backpropagation-based non-spiking network.
arXiv Detail & Related papers (2020-07-27T03:39:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.