Revisiting Direct Encoding: Learnable Temporal Dynamics for Static Image Spiking Neural Networks
- URL: http://arxiv.org/abs/2512.01687v1
- Date: Mon, 01 Dec 2025 13:55:00 GMT
- Title: Revisiting Direct Encoding: Learnable Temporal Dynamics for Static Image Spiking Neural Networks
- Authors: Huaxu He
- Abstract summary: Handling static images that lack inherent temporal dynamics is a fundamental challenge for spiking neural networks (SNNs). In directly trained SNNs, static inputs are typically repeated across time steps, causing the temporal dimension to collapse into a rate-like representation and preventing meaningful temporal modeling. This work revisits the reported performance gap between direct and rate-based encodings and shows that it primarily stems from convolutional learnability and surrogate gradient formulations rather than the encoding schemes themselves.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Handling static images that lack inherent temporal dynamics remains a fundamental challenge for spiking neural networks (SNNs). In directly trained SNNs, static inputs are typically repeated across time steps, causing the temporal dimension to collapse into a rate-like representation and preventing meaningful temporal modeling. This work revisits the reported performance gap between direct and rate-based encodings and shows that it primarily stems from convolutional learnability and surrogate gradient formulations rather than the encoding schemes themselves. To illustrate this mechanism-level clarification, we introduce a minimal learnable temporal encoding that adds adaptive phase shifts to induce meaningful temporal variation from static inputs.
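The abstract names two mechanisms, repeat-based direct encoding and a learnable phase-shift encoding, without giving formulas. The following PyTorch sketch is therefore an illustration under stated assumptions, not the authors' implementation: `PhaseShiftEncoder` and `SurrogateSpike` are hypothetical names, and the per-channel sinusoidal modulation and the rectangular surrogate-gradient window are assumptions made for this example.

```python
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; rectangular surrogate
    gradient in the backward pass (one common surrogate choice)."""

    @staticmethod
    def forward(ctx, v, threshold=1.0):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pass gradient only in a window around the threshold.
        window = (torch.abs(v - ctx.threshold) < 0.5).float()
        return grad_output * window, None


class PhaseShiftEncoder(nn.Module):
    """Hypothetical minimal learnable temporal encoding: a static image
    is repeated over T steps, then modulated by a learnable per-channel
    sinusoid so the T copies are no longer identical."""

    def __init__(self, channels: int, T: int):
        super().__init__()
        self.T = T
        self.phase = nn.Parameter(torch.zeros(channels))  # adaptive phase shifts
        self.freq = nn.Parameter(torch.ones(channels))    # learnable frequencies

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) static image -> (T, B, C, H, W) temporal sequence.
        B, C, H, W = x.shape
        t = torch.arange(self.T, dtype=x.dtype, device=x.device)  # (T,)
        mod = 1.0 + 0.5 * torch.sin(
            t.view(self.T, 1) * self.freq.view(1, C) + self.phase.view(1, C)
        )  # (T, C)
        return x.unsqueeze(0) * mod.view(self.T, 1, C, 1, 1)


# Plain direct encoding, by contrast, just repeats the image, so all T
# inputs are identical and time collapses into a rate:
#   x_seq = x.unsqueeze(0).repeat(T, 1, 1, 1, 1)
```

The commented `repeat` line is the baseline the abstract criticizes: every one of the T inputs is identical, so downstream dynamics reduce to spike rates. The learnable modulation is the minimal change that makes the T slices genuinely differ, which is the point of the paper's mechanism-level comparison.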
Related papers
- SpikingGamma: Surrogate-Gradient Free and Temporally Precise Online Training of Spiking Neural Networks with Smoothed Delays [1.5166105038254163]
Spiking Neural Networks (SNNs) promise energy-efficient, low-latency AI through sparse, event-driven computation. Yet, training SNNs under fine temporal discretization remains a major challenge, hindering both low-latency responsiveness and the mapping of software-trained SNNs to efficient hardware. We show that this SpikingGamma model supports direct error backpropagation without surrogate gradients, can learn fine temporal patterns with minimal spiking in an online manner, and scales feedforward SNNs to complex tasks and benchmarks with competitive accuracy.
arXiv Detail & Related papers (2026-02-02T11:35:16Z)
- Adaptive Visual Autoregressive Acceleration via Dual-Linkage Entropy Analysis [50.48301331112126]
We propose NOVA, a training-free token reduction acceleration framework for Visual AutoRegressive modeling. NOVA adaptively determines the acceleration activation scale during inference by identifying the inflection point of scale entropy growth online. Experiments and analyses validate NOVA as a simple yet effective training-free acceleration framework.
arXiv Detail & Related papers (2026-02-01T17:29:42Z)
- ChronoPlastic Spiking Neural Networks [0.0]
Spiking neural networks (SNNs) offer a biologically grounded and energy-efficient alternative to conventional neural architectures. CPSNNs embed temporal control directly within local synaptic dynamics. CPSNNs learn long-gap temporal dependencies significantly faster and more reliably than standard SNN baselines.
arXiv Detail & Related papers (2025-12-17T06:58:04Z)
- Unleashing Temporal Capacity of Spiking Neural Networks through Spatiotemporal Separation [67.69345363409835]
Spiking Neural Networks (SNNs) are considered naturally suited for temporal processing, with membrane potential propagation widely regarded as the core temporal modeling mechanism. We design Non-Stateful (NS) models that progressively remove membrane propagation to isolate its stage-wise role. Experiments reveal a counterintuitive phenomenon: moderate removal in shallow layers improves performance, while excessive removal causes collapse.
arXiv Detail & Related papers (2025-12-05T07:05:53Z)
- Learning Time in Static Classifiers [44.358377952850994]
We propose a simple yet effective framework that equips standard feedforward classifiers with temporal reasoning. We use a novel Support-Exemplar-Query (SEQ) learning paradigm, which structures training data into temporally coherent trajectories. Our approach bridges static and temporal learning in a modular and data-efficient manner, requiring only a simple module on top of pre-extracted features.
arXiv Detail & Related papers (2025-11-15T18:42:51Z)
- Stabilizing Direct Training of Spiking Neural Networks: Membrane Potential Initialization and Threshold-robust Surrogate Gradient [11.229584148105113]
Spiking Neural Networks (SNNs) have demonstrated high-quality outputs even at early timesteps. In this paper, we present two key innovations: MP-Init (Membrane Potential Initialization) and TrSG (Threshold-robust Surrogate Gradient).
arXiv Detail & Related papers (2025-11-11T19:15:30Z)
- Fractional Spike Differential Equations Neural Network with Efficient Adjoint Parameters Training [63.3991315762955]
Spiking Neural Networks (SNNs) draw inspiration from biological neurons to create realistic models for brain-like computation. Most existing SNNs assume a single time constant for neuronal membrane voltage dynamics, modeled by first-order ordinary differential equations (ODEs) with Markovian characteristics. We propose the Fractional SPIKE Differential Equation neural network (fspikeDE), which captures long-term dependencies in membrane voltage and spike trains through fractional-order dynamics (see the illustrative sketch after this list).
arXiv Detail & Related papers (2025-07-22T18:20:56Z)
- Spline Deformation Field [21.755382164519776]
Inductive biases can hinder canonical spatial coherence in ill-posed scenarios. We introduce a novel low-rank spatial encoding, replacing conventional coupled techniques. It achieves competitive dynamic reconstruction quality compared to state-of-the-art methods.
arXiv Detail & Related papers (2025-07-10T08:11:46Z)
- StPR: Spatiotemporal Preservation and Routing for Exemplar-Free Video Class-Incremental Learning [79.44594332189018]
Class-Incremental Learning (CIL) seeks to develop models that continuously learn new action categories over time without forgetting previously acquired knowledge. Existing approaches either rely on stored exemplars, raising concerns over memory and privacy, or adapt static image-based methods that neglect temporal modeling. We propose a unified and exemplar-free VCIL framework that explicitly disentangles and preserves spatiotemporal information.
arXiv Detail & Related papers (2025-05-20T06:46:51Z)
- Optimizing Spatio-Temporal Information Processing in Spiking Neural Networks via Unconstrained Leaky Integrate-and-Fire Neurons and Hybrid Coding [0.0]
Spiking Neural Networks (SNNs) exhibit higher energy efficiency compared to Artificial Neural Networks (ANNs). SNNs possess a crucial characteristic, namely the ability to process temporal information. This paper proposes an Unconstrained Leaky Integrate-and-Fire (ULIF) neuronal model that allows for different time steps.
arXiv Detail & Related papers (2024-08-22T13:58:35Z)
- Enhancing Dynamic CT Image Reconstruction with Neural Fields and Optical Flow [0.0]
We show the benefits of introducing explicit motion regularizers for dynamic inverse problems based on partial differential equations. We also compare neural fields against a grid-based solver and show that the former outperforms the latter in terms of PSNR.
arXiv Detail & Related papers (2024-06-03T13:07:29Z)
- Detecting Anomalies in Dynamic Graphs via Memory enhanced Normality [39.476378833827184]
Anomaly detection in dynamic graphs presents a significant challenge due to the temporal evolution of graph structures and attributes.
We introduce a novel spatial-temporal memories-enhanced graph autoencoder (STRIPE).
STRIPE significantly outperforms existing methods, with a 5.8% improvement in AUC scores and 4.62x faster training.
arXiv Detail & Related papers (2024-03-14T02:26:10Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Time Dependence in Non-Autonomous Neural ODEs [74.78386661760662]
We propose a novel family of Neural ODEs with time-varying weights.
We outperform previous Neural ODE variants in both speed and representational capacity.
arXiv Detail & Related papers (2020-05-05T01:41:46Z)
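As noted in the fspikeDE entry above, fractional-order dynamics replace the Markovian first-order membrane update with a history-dependent one. The sketch below is a generic NumPy illustration of a Grunwald-Letnikov discretized fractional LIF neuron, not fspikeDE's actual model or its adjoint training procedure; `fractional_lif`, its parameter values, and the hard reset are assumptions made for this example.

```python
import numpy as np


def gl_coefficients(alpha: float, n: int) -> np.ndarray:
    """Grunwald-Letnikov coefficients w_k = (-1)^k * binom(alpha, k),
    via the standard recursion w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w


def fractional_lif(I: np.ndarray, alpha: float = 0.8, tau: float = 2.0,
                   v_th: float = 1.0, h: float = 1.0):
    """Fractional-order LIF: D^alpha V = -V / tau + I(t).
    The GL update makes V_n depend on the whole voltage history,
    which is the long-term dependence the fspikeDE entry refers to."""
    T = len(I)
    w = gl_coefficients(alpha, T)
    V = np.zeros(T)
    spikes = np.zeros(T)
    for n in range(T):
        v_prev = V[n - 1] if n > 0 else 0.0
        # History term: weighted sum over all past membrane values.
        history = sum(w[k] * V[n - k] for k in range(1, n + 1))
        V[n] = (h ** alpha) * (-v_prev / tau + I[n]) - history
        if V[n] >= v_th:
            spikes[n] = 1.0
            V[n] = 0.0  # hard reset (a simplification; it also rewrites history)
    return V, spikes


# Example: a constant input current produces history-dependent firing.
V, s = fractional_lif(np.full(50, 0.6))
```

Setting alpha = 1 collapses the GL weights to (1, -1, 0, ...), recovering the ordinary first-order Euler LIF update; alpha < 1 spreads weight over the full voltage history, which is the non-Markovian behavior the abstract describes.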
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.