Spectrum of non-Hermitian deep-Hebbian neural networks
- URL: http://arxiv.org/abs/2208.11411v1
- Date: Wed, 24 Aug 2022 10:09:47 GMT
- Title: Spectrum of non-Hermitian deep-Hebbian neural networks
- Authors: Zijian Jiang and Ziming Chen and Tianqi Hou and Haiping Huang
- Abstract summary: We integrate the experimental observation of a wide synaptic integration window into our model of sequence retrieval in continuous-time dynamics.
Our work provides a systematic study of time-lagged correlations with arbitrary time delays, and thus can inspire future studies of a broad class of memory models.
- Score: 3.333967282951668
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks with recurrent asymmetric couplings are important to
understand how episodic memories are encoded in the brain. Here, we integrate
the experimental observation of a wide synaptic integration window into our
model of sequence retrieval in continuous-time dynamics. The model with
non-normal neuronal interactions is studied theoretically by deriving a random
matrix theory for the Jacobian matrix of the neural dynamics. The spectrum
bears several distinct features, such as the breaking of rotational symmetry
about the origin and the emergence of nested voids within the spectrum
boundary. The spectral density is thus highly non-uniform in the complex plane.
The random matrix theory also predicts a transition to chaos. In particular,
the edge of chaos provides computational benefits for the sequential retrieval
of memories. Our work provides a systematic study of time-lagged correlations
with arbitrary time delays, and thus can inspire future studies of a broad
class of memory models, and even big data analysis of biological time series.
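As an illustrative baseline for the spectra discussed above (a sketch, not the paper's model): the eigenvalues of an i.i.d. asymmetric random coupling matrix fill a disk of radius roughly one in the complex plane (the circular law), and it is this rotationally symmetric picture that the non-normal, time-lagged couplings studied here deform. A minimal NumPy sketch, with the network size N and the 1/N variance scaling chosen purely for illustration:

```python
import numpy as np

# Illustrative baseline, not the paper's model: spectrum of an i.i.d.
# asymmetric (non-Hermitian) random coupling matrix with variance 1/N.
rng = np.random.default_rng(1)
N = 500
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # J[i, j] != J[j, i]

# Eigenvalues are genuinely complex and fill a disk (circular law);
# the spectral radius approaches 1 as N grows (the edge of chaos in
# linearized rate dynamics at unit gain).
eigs = np.linalg.eigvals(J)
print(np.abs(eigs).max())
```

Structured, time-lagged couplings (as in the paper's Jacobian) would replace the i.i.d. entries of `J` and break the rotational symmetry of this disk.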
Related papers
- Disordered Dynamics in High Dimensions: Connections to Random Matrices and Machine Learning [52.26396748560348]
We provide an overview of high-dimensional dynamical systems driven by random matrices. We focus on applications to simple models of learning and generalization in machine learning theory.
arXiv Detail & Related papers (2026-01-03T00:12:32Z) - Random matrix theory of sparse neuronal networks with heterogeneous timescales [0.6181093777643575]
Trained recurrent neuronal networks consisting of excitatory (E) and inhibitory (I) units with additive noise can perform working memory computation. Here, we investigate the dynamics near their equilibria and show that the Jacobians are sparse, non-Hermitian rectangular-block matrices modified by heterogeneous synaptic decay timescales and activation-function gains. An analytic description of the spectral edge is obtained, relating statistical parameters of the Jacobians to near-critical features of the equilibria essential for robust working memory computation.
arXiv Detail & Related papers (2025-12-14T17:02:22Z) - Fractional Spike Differential Equations Neural Network with Efficient Adjoint Parameters Training [63.3991315762955]
Spiking Neural Networks (SNNs) draw inspiration from biological neurons to create realistic models for brain-like computation. Most existing SNNs assume a single time constant for neuronal membrane voltage dynamics, modeled by first-order ordinary differential equations (ODEs) with Markovian characteristics. We propose the Fractional SPIKE Differential Equation neural network (fspikeDE), which captures long-term dependencies in membrane voltage and spike trains through fractional-order dynamics.
arXiv Detail & Related papers (2025-07-22T18:20:56Z) - The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z) - Long Sequence Hopfield Memory [32.28395813801847]
Sequence memory enables agents to encode, store, and retrieve complex sequences of stimuli and actions.
We introduce a nonlinear interaction term that enhances the separation between stored patterns.
We extend this model to store sequences with variable timing between state transitions.
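The asymmetric-coupling sequence memory that this line of work builds on can be sketched in a few lines (a textbook-style toy construction, not the specific model of the paper above): couplings map each stored pattern to its successor, so repeated sign updates step the network through the sequence. The sizes N and P and the cyclic ordering are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5
xi = rng.choice([-1, 1], size=(P, N))  # P random binary patterns

# Asymmetric couplings: each pattern drives its successor (cyclically),
# W = (1/N) * sum_mu xi[mu+1] xi[mu]^T, so W is non-symmetric by design.
W = sum(np.outer(xi[(mu + 1) % P], xi[mu]) for mu in range(P)) / N

# Starting from the first pattern, synchronous updates walk the sequence.
s = xi[0].copy()
retrieved = []
for _ in range(P):
    s = np.where(W @ s >= 0, 1, -1)          # one synchronous update
    retrieved.append(int(np.argmax(xi @ s)))  # closest stored pattern
print(retrieved)  # expected cyclic order: [1, 2, 3, 4, 0]
```

With P much smaller than N the crosstalk between patterns is negligible, so each update lands on the next pattern in the cycle; the nonlinear interaction terms and variable-timing extensions in the paper above refine exactly this separation.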
arXiv Detail & Related papers (2023-06-07T15:41:03Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Deep learning delay coordinate dynamics for chaotic attractors from partial observable data [0.0]
We utilize deep artificial neural networks (ANNs) to learn discrete-time maps and continuous-time flows of the partial state.
We demonstrate the capacity of deep ANNs to predict chaotic behavior from a scalar observation on a manifold of dimension three via the Lorenz system.
arXiv Detail & Related papers (2022-11-20T19:25:02Z) - Theoretical analysis of deep neural networks for temporally dependent observations [1.6752182911522522]
We study theoretical properties of deep neural networks for modeling non-linear time series data.
Results are supported via various numerical simulation settings as well as an application to a macroeconomic data set.
arXiv Detail & Related papers (2022-10-20T18:56:37Z) - Consistency of mechanistic causal discovery in continuous-time using Neural ODEs [85.7910042199734]
We consider causal discovery in continuous-time for the study of dynamical systems.
We propose a causal discovery algorithm based on penalized Neural ODEs.
arXiv Detail & Related papers (2021-05-06T08:48:02Z) - Chaos and Ergodicity in Extended Quantum Systems with Noisy Driving [0.0]
We study the time evolution operator in a family of local quantum circuits with random fields in a fixed direction.
We show that for the systems under consideration the generalised spectral form factor can be expressed in terms of dynamical correlation functions.
This also provides a connection between the many-body Thouless time $\tau_{\rm th}$ -- the time at which the generalised spectral form factor starts following the random matrix theory prediction -- and the conservation laws of the system.
arXiv Detail & Related papers (2020-10-23T15:54:55Z) - Time-Reversal Symmetric ODE Network [138.02741983098454]
Time-reversal symmetry is a fundamental property that frequently holds in classical and quantum mechanics.
We propose a novel loss function that measures how well our ordinary differential equation (ODE) networks comply with this time-reversal symmetry.
We show that, even for systems that do not possess the full time-reversal symmetry, TRS-ODENs can achieve better predictive performances over baselines.
arXiv Detail & Related papers (2020-07-22T12:19:40Z) - Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.