Synchronization and semantization in deep spiking networks
- URL: http://arxiv.org/abs/2508.12975v1
- Date: Mon, 18 Aug 2025 14:51:58 GMT
- Title: Synchronization and semantization in deep spiking networks
- Authors: Jonas Oberste-Frielinghaus, Anno C. Kurth, Julian Göltz, Laura Kriener, Junji Ito, Mihai A. Petrovici, Sonja Grün
- Abstract summary: Recent studies have shown how spiking networks can learn complex functionality through error-correcting plasticity, but the resulting structures and dynamics remain poorly studied. We train a multi-layer spiking network, as a conceptual analog of the visual hierarchy, for visual input classification using spike-time encoding. After learning, we observe the development of distinct spatio-temporal activity patterns. While input patterns are synchronous by construction, activity in early layers first spreads out over time, followed by re-convergence into sharp pulses as classes are gradually extracted. The emergence of synchronicity is accompanied by the formation of increasingly distinct pathways, reflecting the gradual semantization of input activity.
- Score: 0.9411751957919126
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent studies have shown how spiking networks can learn complex functionality through error-correcting plasticity, but the resulting structures and dynamics remain poorly studied. To elucidate how these models may link to observed dynamics in vivo and thus how they may ultimately explain cortical computation, we need a better understanding of their emerging patterns. We train a multi-layer spiking network, as a conceptual analog of the bottom-up visual hierarchy, for visual input classification using spike-time encoding. After learning, we observe the development of distinct spatio-temporal activity patterns. While input patterns are synchronous by construction, activity in early layers first spreads out over time, followed by re-convergence into sharp pulses as classes are gradually extracted. The emergence of synchronicity is accompanied by the formation of increasingly distinct pathways, reflecting the gradual semantization of input activity. We thus observe hierarchical networks learning spike latency codes to naturally acquire activity patterns characterized by synchronicity and separability, with pronounced excitatory pathways ascending through the layers. This provides a rigorous computational hypothesis for the experimentally observed synchronicity in the visual system as a natural consequence of deep learning in cortex.
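The abstract's input scheme, spike-time encoding, maps stimulus intensity to the timing of a neuron's first spike, so that stronger inputs fire earlier. A minimal sketch of such a time-to-first-spike (latency) code is given below; the names, the linear intensity-to-latency mapping, and the threshold are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of time-to-first-spike (spike-latency) encoding.
# Assumptions (not from the paper): a linear mapping, an encoding
# window T_MAX, and a silence threshold THETA.

T_MAX = 10.0   # encoding window (arbitrary time units)
THETA = 0.05   # intensities below this emit no spike

def encode_latency(intensities):
    """Map input intensities in [0, 1] to first-spike times.

    Stronger inputs fire earlier: intensity 1.0 -> t = 0,
    intensity near the threshold -> t close to T_MAX,
    sub-threshold input -> None (neuron stays silent).
    """
    spikes = []
    for x in intensities:
        if x < THETA:
            spikes.append(None)               # silent neuron
        else:
            spikes.append(T_MAX * (1.0 - x))  # linear latency code
    return spikes

# A bright pixel spikes first, a dim one late, background not at all.
print(encode_latency([1.0, 0.5, 0.01]))  # -> [0.0, 5.0, None]
```

Under such a code, the "synchronous by construction" input patterns of the paper correspond to all supra-threshold neurons receiving equal intensities and thus firing at the same latency.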
Related papers
- Learning by Steering the Neural Dynamics: A Statistical Mechanics Perspective [0.0]
We study how neural dynamics can support fully local, distributed learning. We propose a biologically plausible algorithm for supervised learning with any binary recurrent network.
arXiv Detail & Related papers (2025-10-13T22:28:34Z)
- Kuramoto Orientation Diffusion Models [67.0711709825854]
Orientation-rich images, such as fingerprints and textures, often exhibit coherent angular patterns. Motivated by the role of phase synchronization in biological systems, we propose a score-based generative model. Our model achieves competitive results on general image benchmarks and significantly improves generation quality on orientation-dense datasets like fingerprints and textures.
arXiv Detail & Related papers (2025-09-18T18:18:49Z)
- New Evidence of the Two-Phase Learning Dynamics of Neural Networks [59.55028392232715]
We introduce an interval-wise perspective that compares network states across a time window. We show that the response of the network to a perturbation exhibits a transition from chaotic to stable. We also find that after this transition point the model's functional trajectory is confined to a narrow cone-shaped subset.
arXiv Detail & Related papers (2025-05-20T04:03:52Z)
- Hypernym Bias: Unraveling Deep Classifier Training Dynamics through the Lens of Class Hierarchy [44.99833362998488]
We argue that the learning process in classification problems can be understood through the lens of label clustering. Specifically, we observe that networks tend to distinguish higher-level (hypernym) categories in the early stages of training. We introduce a novel framework to track the evolution of the feature manifold during training, revealing how the hierarchy of class relations emerges.
arXiv Detail & Related papers (2025-02-17T18:47:01Z)
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture that is compatible with, and scalable within, deep learning frameworks.
We show end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
arXiv Detail & Related papers (2024-04-22T09:40:07Z)
- Discovering group dynamics in coordinated time series via hierarchical recurrent switching-state models [5.250223406627639]
We seek a computationally efficient model for a collection of time series arising from multiple interacting entities (a.k.a. "agents"). Recent models of temporal patterns across individuals fail to incorporate explicit system-level collective behavior that can influence the trajectories of individual entities. We employ a latent system-level discrete-state Markov chain that provides top-down influence on latent entity-level chains, which in turn govern the emission of each observed time series.
arXiv Detail & Related papers (2024-01-26T16:06:01Z)
- A Waddington landscape for prototype learning in generalized Hopfield networks [0.0]
We study the learning dynamics of Generalized Hopfield networks.
We observe a strong resemblance to the canalized, or low-dimensional, dynamics of cells as they differentiate.
arXiv Detail & Related papers (2023-12-04T21:28:14Z)
- Latent Traversals in Generative Models as Potential Flows [113.4232528843775]
We propose to model latent structures with a learned dynamic potential landscape.
Inspired by physics, optimal transport, and neuroscience, these potential landscapes are learned as physically realistic partial differential equations.
Our method achieves trajectories that are both qualitatively and quantitatively more disentangled than those of state-of-the-art baselines.
arXiv Detail & Related papers (2023-04-25T15:53:45Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Dynamical Equations With Bottom-up Self-Organizing Properties Learn Accurate Dynamical Hierarchies Without Any Loss Function [15.122944754472435]
We propose a learning system where patterns are defined within the realm of nonlinear dynamics with positive and negative feedback loops.
Experiments reveal that such a system can map temporal to spatial correlation, enabling hierarchical structures to be learned from sequential data.
arXiv Detail & Related papers (2023-02-04T10:00:14Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons [0.7340017786387767]
We introduce Latent Equilibrium, a new framework for inference and learning in networks of slow components.
We derive disentangled neuron and synapse dynamics from a prospective energy function.
We show how our principle can be applied to detailed models of cortical microcircuitry.
arXiv Detail & Related papers (2021-10-27T16:15:55Z)
- Causal Navigation by Continuous-time Neural Networks [108.84958284162857]
We propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks.
We evaluate our method in the context of visual-control learning of drones over a series of complex tasks.
arXiv Detail & Related papers (2021-06-15T17:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.