Global stability of a Hebbian/anti-Hebbian network for principal subspace learning
- URL: http://arxiv.org/abs/2601.13170v1
- Date: Mon, 19 Jan 2026 15:53:31 GMT
- Title: Global stability of a Hebbian/anti-Hebbian network for principal subspace learning
- Authors: David Lipshutz, Robert J. Lipshutz
- Abstract summary: How modifications at the synaptic level give rise to such computations at the network level remains an open question. We prove global stability of the continuum limit of the synaptic dynamics and show that the dynamics evolve in two phases.
- Score: 8.27902169448946
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Biological neural networks self-organize according to local synaptic modifications to produce stable computations. How modifications at the synaptic level give rise to such computations at the network level remains an open question. Pehlevan et al. [Neur. Comp. 27 (2015), 1461--1495] proposed a model of a self-organizing neural network with Hebbian and anti-Hebbian synaptic updates that implements an algorithm for principal subspace analysis; however, global stability of the nonlinear synaptic dynamics has not been established. Here, for the case that the feedforward and recurrent weights evolve at the same timescale, we prove global stability of the continuum limit of the synaptic dynamics and show that the dynamics evolve in two phases. In the first phase, the synaptic weights converge to an invariant manifold where the 'neural filters' are orthonormal. In the second phase, the synaptic dynamics follow the gradient flow of a non-convex potential function whose minima correspond to neural filters that span the principal subspace of the input data.
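For a concrete picture of the network being analyzed, the following is a minimal NumPy sketch of an online Hebbian/anti-Hebbian principal-subspace update in the spirit of Pehlevan et al. (2015). The learning rate, initialization, and explicit fixed-point solve for the neural output are illustrative assumptions, not the exact algorithm or timescale regime studied in the paper.

```python
import numpy as np

def hebbian_anti_hebbian_psp(X, k, eta=0.01, seed=0):
    """Online principal-subspace sketch with Hebbian feedforward weights (W)
    and anti-Hebbian recurrent weights (M). X has shape (n_samples, d)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((k, d)) / np.sqrt(d)  # feedforward weights
    M = np.zeros((k, k))                          # lateral (recurrent) weights
    for x in X:
        # Neural output at the fixed point of the recurrent dynamics:
        # y = W x - M y  =>  y = (I + M)^{-1} W x
        y = np.linalg.solve(np.eye(k) + M, W @ x)
        # Hebbian update of W, anti-Hebbian update of M
        W += eta * (np.outer(y, x) - W)
        M += eta * (np.outer(y, y) - M)
    return W, M

if __name__ == "__main__":
    # Toy data whose first two directions carry most of the variance.
    rng = np.random.default_rng(1)
    data = rng.standard_normal((5000, 10)) @ np.diag([5, 4, 1, 1, 1, 1, 1, 1, 1, 1])
    W, M = hebbian_anti_hebbian_psp(data, k=2)
    F = np.linalg.solve(np.eye(2) + M, W)  # 'neural filters'
    print(F @ F.T)  # approximately the identity after convergence
```

At convergence the rows of the filter matrix F = (I + M)^{-1} W become approximately orthonormal and span the top principal subspace, matching the two-phase picture described in the abstract.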
Related papers
- On the Mechanism and Dynamics of Modular Addition: Fourier Features, Lottery Ticket, and Grokking [49.1352577985191]
We present a comprehensive analysis of how two-layer neural networks learn features to solve the modular addition task. Our work provides a full mechanistic interpretation of the learned model and a theoretical explanation of its training dynamics.
arXiv Detail & Related papers (2026-02-18T20:25:13Z) - Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation. Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and forces -- to represent both autonomous and non-autonomous processes in neural systems. Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
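As a rough illustration of the kind of latent dynamics described above (not LangevinFlow itself), the sketch below integrates an underdamped Langevin equation with Euler-Maruyama steps; the quadratic potential, damping, mass, and temperature are placeholders for the learned components of the actual model.

```python
import numpy as np

def underdamped_langevin(z0, v0, grad_potential, steps=1000, dt=1e-2,
                         gamma=0.5, mass=1.0, temperature=1.0, seed=0):
    """Euler-Maruyama integration of underdamped Langevin dynamics:
        dz = v dt
        m dv = (-grad U(z) - gamma * v) dt + sqrt(2 gamma T) dW
    """
    rng = np.random.default_rng(seed)
    z, v = z0.copy(), v0.copy()
    noise_scale = np.sqrt(2.0 * gamma * temperature * dt) / mass
    traj = []
    for _ in range(steps):
        force = -grad_potential(z) - gamma * v
        v = v + dt * force / mass + noise_scale * rng.standard_normal(z.shape)
        z = z + dt * v
        traj.append(z.copy())
    return np.array(traj)

# Placeholder quadratic potential U(z) = 0.5 * ||z||^2, so grad U(z) = z.
traj = underdamped_langevin(np.ones(3), np.zeros(3), grad_potential=lambda z: z)
print(traj.shape)  # (1000, 3) latent trajectory
```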
arXiv Detail & Related papers (2025-07-15T17:57:48Z) - Similarity Matching Networks: Hebbian Learning and Convergence Over Multiple Time Scales [5.093257685701887]
We consider and analyze the similarity matching network for principal subspace projection. By leveraging a multilevel optimization framework, we prove convergence of the dynamics in the offline setting.
arXiv Detail & Related papers (2025-06-06T14:46:22Z) - New Evidence of the Two-Phase Learning Dynamics of Neural Networks [59.55028392232715]
We introduce an interval-wise perspective that compares network states across a time window. We show that the response of the network to a perturbation exhibits a transition from chaotic to stable. We also find that after this transition point the model's functional trajectory is confined to a narrow cone-shaped subset.
arXiv Detail & Related papers (2025-05-20T04:03:52Z) - Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of Allostasis to the control of internal representations. In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
arXiv Detail & Related papers (2025-03-20T12:28:08Z) - Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
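A minimal sketch of stochastic gradient Langevin dynamics with without-replacement minibatching (each epoch is a shuffled pass over the data rather than i.i.d. sampling); the toy Gaussian-mean objective and hyperparameters are illustrative and not taken from the paper.

```python
import numpy as np

def sgld_without_replacement(grad_fn, theta0, data, epochs=10, batch_size=32,
                             step_size=1e-3, temperature=1.0, seed=0):
    """SGLD where each epoch visits every example exactly once (a shuffled
    partition into minibatches) instead of sampling minibatches with replacement."""
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    n = len(data)
    noise_scale = np.sqrt(2.0 * step_size * temperature)
    for _ in range(epochs):
        perm = rng.permutation(n)  # without-replacement pass over the data
        for start in range(0, n, batch_size):
            batch = data[perm[start:start + batch_size]]
            g = grad_fn(theta, batch) * (n / len(batch))  # rescale to full-data gradient
            theta = theta - step_size * g + noise_scale * rng.standard_normal(theta.shape)
    return theta

# Toy example: sample around the posterior mode of the mean of Gaussian data.
data = np.random.default_rng(1).normal(loc=2.0, scale=1.0, size=(1000, 1))
grad_fn = lambda theta, batch: np.sum(theta - batch, axis=0)  # grad of 0.5 * sum (theta - x)^2
theta = sgld_without_replacement(grad_fn, np.zeros(1), data)
print(theta)  # close to 2.0
```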
arXiv Detail & Related papers (2023-06-06T09:12:49Z) - Theory of coupled neuronal-synaptic dynamics [3.626013617212667]
In neural circuits, synaptic strengths influence neuronal activity by shaping network dynamics.
We study a recurrent-network model in which neuronal units and synaptic couplings are interacting dynamic variables.
We show that adding Hebbian plasticity slows activity in chaotic networks and can induce chaos.
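To make the idea of jointly evolving rates and couplings concrete, here is a toy Euler integration of a rate network whose coupling matrix carries a slow Hebbian component; the time constants, gain, and plasticity rule are illustrative assumptions rather than the model analyzed in the paper.

```python
import numpy as np

def simulate_coupled_dynamics(n=100, steps=5000, dt=0.1, g=2.0,
                              tau_x=1.0, tau_j=100.0, eta=0.1, seed=0):
    """Rate units x and Hebbian couplings J_hebb evolve together:
        tau_x dx/dt     = -x + (J_random + J_hebb) phi(x)
        tau_j dJ_hebb/dt = -J_hebb + eta * phi(x) phi(x)^T
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n) * 0.1
    J_random = g * rng.standard_normal((n, n)) / np.sqrt(n)  # frozen random part
    J_hebb = np.zeros((n, n))                                # plastic Hebbian part
    for _ in range(steps):
        r = np.tanh(x)
        x += dt / tau_x * (-x + (J_random + J_hebb) @ r)
        J_hebb += dt / tau_j * (-J_hebb + eta * np.outer(r, r))
    return x, J_hebb

x, J_hebb = simulate_coupled_dynamics()
print(np.std(x), np.linalg.norm(J_hebb))
```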
arXiv Detail & Related papers (2023-02-17T16:42:59Z) - Mean-Field Analysis of Two-Layer Neural Networks: Global Optimality with Linear Convergence Rates [7.094295642076582]
The mean-field regime is a theoretically attractive alternative to the NTK (lazy training) regime.
We establish a new linear convergence result for two-layer neural networks trained by continuous-time noisy gradient descent in the mean-field regime.
arXiv Detail & Related papers (2022-05-19T21:05:40Z) - Stability Analysis of Fractional Order Memristor Synapse-coupled Hopfield Neural Network with Ring Structure [0.0]
We first present a fractional-order memristor synapse-coupled Hopfield neural network with two neurons.
We extend the model to a neural network with a ring structure consisting of n sub-networks of neurons, which increases synchronization in the network.
In the n-neuron case, it is revealed that the stability depends on the structure and number of sub-networks.
arXiv Detail & Related papers (2021-09-29T12:33:23Z) - Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
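As an illustration of tracking a Hessian norm during training (not necessarily the paper's estimator), the sketch below approximates the spectral norm of the loss Hessian for a small model via PyTorch double-backward Hessian-vector products and power iteration; the model, data, and iteration counts are placeholders.

```python
import torch

def hessian_spectral_norm(loss, params, iters=20):
    """Approximate the largest Hessian eigenvalue magnitude via power iteration
    on Hessian-vector products computed with double backward."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    v = torch.randn_like(flat_grad)
    v /= v.norm()
    for _ in range(iters):
        hv = torch.autograd.grad(flat_grad @ v, params, retain_graph=True)
        hv = torch.cat([h.reshape(-1) for h in hv])
        v = hv / (hv.norm() + 1e-12)
    hv = torch.autograd.grad(flat_grad @ v, params, retain_graph=True)
    hv = torch.cat([h.reshape(-1) for h in hv])
    return (v @ hv).abs().item()

# Placeholder model and batch (stand-ins for the Word2Vec / MNIST / CIFAR setups).
model = torch.nn.Sequential(torch.nn.Linear(20, 32), torch.nn.Tanh(), torch.nn.Linear(32, 10))
x, y = torch.randn(64, 20), torch.randint(0, 10, (64,))
loss = torch.nn.functional.cross_entropy(model(x), y)
print(hessian_spectral_norm(loss, list(model.parameters())))
```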
arXiv Detail & Related papers (2020-04-20T18:12:56Z)