Drift-Diffusion Matching: Embedding dynamics in latent manifolds of asymmetric neural networks
- URL: http://arxiv.org/abs/2602.14885v1
- Date: Mon, 16 Feb 2026 16:15:59 GMT
- Title: Drift-Diffusion Matching: Embedding dynamics in latent manifolds of asymmetric neural networks
- Authors: Ramón Nartallo-Kaluarachchi, Renaud Lambiotte, Alain Goriely
- Abstract summary: We introduce a general framework for training continuous-time RNNs to represent arbitrary dynamical systems within a low-dimensional latent subspace. We show that RNNs can faithfully embed the drift and diffusion of a given differential equation, including nonlinear and nonequilibrium dynamics such as chaotic attractors. Our results extend attractor neural network theory beyond equilibrium, showing that asymmetric neural populations can implement a broad class of dynamical computations within low-dimensional manifolds, unifying ideas from associative memory, nonequilibrium statistical mechanics, and neural computation.
- Score: 0.8793721044482612
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recurrent neural networks (RNNs) provide a theoretical framework for understanding computation in biological neural circuits, yet classical results, such as Hopfield's model of associative memory, rely on symmetric connectivity that restricts network dynamics to gradient-like flows. In contrast, biological networks support rich time-dependent behaviour facilitated by their asymmetry. Here we introduce a general framework, which we term drift-diffusion matching, for training continuous-time RNNs to represent arbitrary stochastic dynamical systems within a low-dimensional latent subspace. Allowing asymmetric connectivity, we show that RNNs can faithfully embed the drift and diffusion of a given stochastic differential equation, including nonlinear and nonequilibrium dynamics such as chaotic attractors. As an application, we construct RNN realisations of stochastic systems that transiently explore various attractors through both input-driven switching and autonomous transitions driven by nonequilibrium currents, which we interpret as models of associative and sequential (episodic) memory. To elucidate how these dynamics are encoded in the network, we introduce decompositions of the RNN based on its asymmetric connectivity and its time-irreversibility. Our results extend attractor neural network theory beyond equilibrium, showing that asymmetric neural populations can implement a broad class of dynamical computations within low-dimensional manifolds, unifying ideas from associative memory, nonequilibrium statistical mechanics, and neural computation.
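To make the construction concrete, below is a minimal numpy sketch of drift-diffusion matching as we read it from the abstract: a one-dimensional double-well SDE, dz = (z - z^3) dt + sigma dW (two attractors, in the spirit of associative memory), is embedded along a line x = u z in the state space of an N-dimensional rate RNN with a linear readout z = p @ x. The rank-one asymmetric connectivity, the tanh nonlinearity, the linear embedding, and the least-squares fit are our illustrative assumptions; the paper's actual parameterization and training objective may differ.

```python
# Hypothetical toy version of drift-diffusion matching (not the paper's code):
# embed  dz = (z - z^3) dt + sigma dW  in the latent readout z = p @ x of a
# rate RNN  dx = (-x + W tanh(x)) dt + g dW.
import numpy as np

rng = np.random.default_rng(0)
N, sigma = 32, 0.5

# Linear embedding u and readout p with p @ u = 1, so z = p @ x along x = u * z.
u = rng.standard_normal(N)
p = u / (u @ u)

f = lambda z: z - z**3                       # target drift: double well at z = +/-1
zs = np.linspace(-2.0, 2.0, 200)             # latent collocation points
X = np.outer(u, zs)                          # embedded states, shape (N, 200)
Phi = np.tanh(X)

# Drift matching: pick W so that p @ (-x + W @ tanh(x)) = f(z) on the manifold.
# Only the effective row r = p @ W matters for the latent drift; fit it by
# least squares, then lift to a full rank-one (asymmetric) W = outer(u, r).
target = f(zs) + zs                          # want r @ tanh(u z) = f(z) + z
r, *_ = np.linalg.lstsq(Phi.T, target, rcond=None)
W = np.outer(u, r)                           # p @ W = (p @ u) * r = r

# Diffusion matching: g = sigma * u gives latent noise p @ g = sigma exactly.
g = sigma * u

# Euler-Maruyama simulation; the latent readout should diffuse between wells.
dt, T = 1e-3, 200_000
x = u.copy()                                 # start in the z = +1 well
traj = np.empty(T)
for t in range(T):
    x += (-x + W @ np.tanh(x)) * dt + g * np.sqrt(dt) * rng.standard_normal()
    traj[t] = p @ x

print("mean |z|:", np.abs(traj).mean())      # ~1: trajectories hug the wells
print("well switches:", int(np.sum(np.diff(np.sign(traj)) != 0)))
```

In this sketch the connectivity W = u rᵀ is generically asymmetric, every term of the dynamics maps into span(u), and off-manifold components decay as dx⊥ = -x⊥ dt, so the embedded line is attracting and the latent readout inherits the target drift and noise amplitude by construction.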
Related papers
- Mechanistic Interpretability of RNNs emulating Hidden Markov Models [2.786617687297761]
Recurrent neural networks (RNNs) provide a powerful approach in neuroscience to infer latent dynamics in neural populations. We show that RNNs can replicate Hidden Markov Model emission statistics, and we then reverse-engineer the trained networks to uncover the mechanisms they implement.
arXiv Detail & Related papers (2025-10-29T16:42:07Z)
- Neuronal Group Communication for Efficient Neural representation [85.36421257648294]
This paper addresses the question of how to build large neural systems that learn efficient, modular, and interpretable representations. We propose Neuronal Group Communication (NGC), a theory-driven framework that reimagines a neural network as a dynamical system of interacting neuronal groups. NGC treats weights as transient interactions between embedding-like neuronal states, with neural computation unfolding through iterative communication among groups of neurons.
arXiv Detail & Related papers (2025-10-19T14:23:35Z)
- Dynamical Learning in Deep Asymmetric Recurrent Neural Networks [1.3421746809394772]
We show that asymmetric deep recurrent neural networks give rise to an exponentially large, dense accessible manifold of internal representations. We propose a distributed learning scheme in which input-output associations emerge naturally from the recurrent dynamics.
arXiv Detail & Related papers (2025-09-05T12:05:09Z)
- Fractional Spike Differential Equations Neural Network with Efficient Adjoint Parameters Training [63.3991315762955]
Spiking Neural Networks (SNNs) draw inspiration from biological neurons to create realistic models for brain-like computation. Most existing SNNs assume a single time constant for neuronal membrane voltage dynamics, modeled by first-order ordinary differential equations (ODEs) with Markovian characteristics. We propose the Fractional SPIKE Differential Equation neural network (fspikeDE), which captures long-term dependencies in membrane voltage and spike trains through fractional-order dynamics.
arXiv Detail & Related papers (2025-07-22T18:20:56Z)
- Recurrent convolutional neural networks for modeling non-adiabatic dynamics of quantum-classical systems [1.23088383881821]
We present an RNN model based on convolutional neural networks for modeling the non-adiabatic dynamics of hybrid quantum-classical systems. We demonstrate that the PARC-CNN architecture can effectively learn the statistical climate of the Holstein model under deep-quench conditions.
arXiv Detail & Related papers (2024-12-09T16:23:25Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical utilization of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Identifying Equivalent Training Dynamics [3.793387630509845]
We develop a framework for identifying conjugate and non-conjugate training dynamics.
By leveraging advances in Koopman operator theory, we demonstrate that comparing Koopman eigenvalues can correctly identify a known equivalence between online mirror descent and online gradient descent; a toy illustration of this eigenvalue comparison appears after this list.
We then utilize our approach to: (a) identify non-conjugate training dynamics between shallow and wide fully connected neural networks; (b) characterize the early phase of training dynamics in convolutional neural networks; (c) uncover non-conjugate training dynamics in Transformers that do and do not undergo grokking.
arXiv Detail & Related papers (2023-02-17T22:15:20Z)
- Interrelation of equivariant Gaussian processes and convolutional neural networks [77.34726150561087]
Currently there exists a rather promising new trend in machine learning (ML) based on the relationship between neural networks (NNs) and Gaussian processes (GPs).
In this work we establish a relationship between the many-channel limit for CNNs equivariant with respect to the two-dimensional Euclidean group, with vector-valued neuron activations, and the corresponding independently introduced equivariant Gaussian processes (GPs).
arXiv Detail & Related papers (2022-09-17T17:02:35Z)
- Theory of gating in recurrent neural networks [5.672132510411465]
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience.
Here, we show that gating offers flexible control of two salient features of the collective dynamics.
The gate controlling timescales leads to a novel, marginally stable state, where the network functions as a flexible integrator.
arXiv Detail & Related papers (2020-07-29T13:20:58Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent; a toy sketch of such a min-max formulation appears after this list.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
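As a companion to the SEM entry directly above, here is a toy sketch of an adversarial moment formulation of that flavor, with both "networks" reduced to single scalar parameters trained by simultaneous gradient descent-ascent. The instrumental-variable data-generating process and the regularized objective are our assumptions for illustration, not the authors' construction.

```python
# Toy sketch (not the authors' estimator): a linear SEM  y = theta* x + noise,
# with instrument z and confounder u, estimated via the min-max objective
#   min_theta max_omega  E[(y - theta x) * omega z] - 0.5 * E[(omega z)^2],
# using simultaneous gradient descent-ascent on scalar players.
import numpy as np

rng = np.random.default_rng(1)
n, theta_star = 10_000, 2.0
z = rng.standard_normal(n)                 # instrument, independent of confounder
u = rng.standard_normal(n)                 # confounder
x = z + u                                  # endogenous regressor (correlated with u)
y = theta_star * x + u + 0.1 * rng.standard_normal(n)

theta, omega, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    resid = y - theta * x
    g = omega * z                          # adversarial test function
    grad_theta = -np.mean(x * g)           # descend the game value in theta
    grad_omega = np.mean(resid * z) - omega * np.mean(z**2)  # ascend in omega
    theta -= lr * grad_theta
    omega += lr * grad_omega

print(f"estimated theta = {theta:.3f} (true {theta_star})")  # ~2.0 despite confounding
```

At the equilibrium of this game the moment condition E[(y - theta x) z] = 0 holds, so the estimate recovers theta* even though plain least squares would be biased by the confounder.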
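Similarly, the following toy sketch illustrates the Koopman-eigenvalue comparison mentioned in "Identifying Equivalent Training Dynamics" above: two discrete-time systems related by an invertible change of variables (a conjugacy) share Koopman eigenvalues, which EDMD with a small monomial dictionary can approximately recover. The example systems and feature set are our assumptions, not the paper's experiments.

```python
# Toy sketch (not the paper's pipeline): comparing EDMD/Koopman eigenvalues to
# detect conjugate dynamics. System A is gradient descent on a quadratic,
# x <- A x; system B is the same map conjugated by y = sinh(x). Conjugate
# systems share Koopman spectra, so leading EDMD eigenvalues should agree.
import numpy as np

rng = np.random.default_rng(2)
A = np.diag([0.9, 0.6])                      # x <- (I - eta * H) x for quadratic loss

def edmd_eigs(step, n_traj=300, n_steps=5):
    """Approximate Koopman eigenvalues via EDMD with monomials (degree <= 3)."""
    def feats(P):
        x1, x2 = P[:, 0], P[:, 1]
        return np.stack([x1, x2, x1**2, x1 * x2, x2**2, x1**3, x2**3], axis=1)
    X, Y = [], []
    for _ in range(n_traj):
        x = rng.uniform(-1.0, 1.0, 2)
        for _ in range(n_steps):
            x_next = step(x)
            X.append(x); Y.append(x_next); x = x_next
    # Least-squares Koopman matrix K with feats(x_next) ~ feats(x) @ K.
    K, *_ = np.linalg.lstsq(feats(np.array(X)), feats(np.array(Y)), rcond=None)
    return np.sort(np.abs(np.linalg.eigvals(K)))[::-1]

step_A = lambda x: A @ x
step_B = lambda y: np.sinh(A @ np.arcsinh(y))    # conjugated copy of step_A

print("A:", np.round(edmd_eigs(step_A), 3))      # exactly {0.9, 0.81, 0.729, ...}
print("B:", np.round(edmd_eigs(step_B), 3))      # approximately the same spectrum
```

Because the dictionary is finite, the spectra agree only approximately for the nonlinear conjugate; with a matched (non-conjugate) pair, the leading eigenvalues would visibly differ, which is the diagnostic the entry above describes.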
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.