Identifiable learning of dissipative dynamics
- URL: http://arxiv.org/abs/2510.24160v1
- Date: Tue, 28 Oct 2025 07:57:14 GMT
- Title: Identifiable learning of dissipative dynamics
- Authors: Aiqing Zhu, Beatrice W. Soh, Grigorios A. Pavliotis, Qianxiao Li,
- Abstract summary: We introduce I-OnsagerNet, a neural framework that learns dissipative dynamics directly from trajectories. I-OnsagerNet extends the Onsager principle to guarantee that the learned potential is obtained from the stationary density. Our approach enables us to calculate the entropy production and to quantify irreversibility, offering a principled way to detect and quantify deviations from equilibrium.
- Score: 25.409059056398124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complex dissipative systems appear across science and engineering, from polymers and active matter to learning algorithms. These systems operate far from equilibrium, where energy dissipation and time irreversibility are key to their behavior, but are difficult to quantify from data. Learning accurate and interpretable models of such dynamics remains a major challenge: the models must be expressive enough to describe diverse processes, yet constrained enough to remain physically meaningful and mathematically identifiable. Here, we introduce I-OnsagerNet, a neural framework that learns dissipative stochastic dynamics directly from trajectories while ensuring both interpretability and uniqueness. I-OnsagerNet extends the Onsager principle to guarantee that the learned potential is obtained from the stationary density and that the drift decomposes cleanly into time-reversible and time-irreversible components, as dictated by the Helmholtz decomposition. Our approach enables us to calculate the entropy production and to quantify irreversibility, offering a principled way to detect and quantify deviations from equilibrium. Applications to polymer stretching in elongational flow and to stochastic gradient Langevin dynamics reveal new insights, including super-linear scaling of barrier heights and sub-linear scaling of entropy production rates with the strain rate, and the suppression of irreversibility with increasing batch size. I-OnsagerNet thus establishes a general, data-driven framework for discovering and interpreting non-equilibrium dynamics.
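As a rough illustration of the structure described above, the sketch below parameterizes a drift that splits into a dissipative part driven by the gradient of a learned potential and a circulating part that breaks detailed balance, in the spirit of the OnsagerNet-style form dX = -(M(X) + W(X)) ∇V(X) dt + σ dW_t with M(X) symmetric positive semidefinite and W(X) antisymmetric. The class name `StructuredDrift` and all architectural choices are assumptions made here for illustration; this is not the I-OnsagerNet implementation, and it omits the correction terms that state-dependent coefficients generally require.

```python
# Minimal, hypothetical sketch of a structured SDE drift
#   b(x) = -(M(x) + W(x)) grad V(x),
# with M symmetric positive semidefinite (dissipative part) and W antisymmetric
# (circulating part). Illustrative only; not the I-OnsagerNet architecture.
import torch
import torch.nn as nn


class StructuredDrift(nn.Module):
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.dim = dim
        # Scalar potential V(x); its gradient drives the dissipative part.
        self.potential = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        # Unconstrained matrix-valued outputs, symmetrized / antisymmetrized below.
        self.raw_M = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim * dim)
        )
        self.raw_W = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim * dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # grad V(x) via autograd (x is assumed to be a leaf tensor in this sketch).
        x = x.requires_grad_(True)
        grad_V = torch.autograd.grad(self.potential(x).sum(), x, create_graph=True)[0]

        L = self.raw_M(x).view(-1, self.dim, self.dim)
        M = L @ L.transpose(1, 2)                       # symmetric positive semidefinite
        A = self.raw_W(x).view(-1, self.dim, self.dim)
        W = A - A.transpose(1, 2)                       # antisymmetric

        dissipative = -torch.einsum("bij,bj->bi", M, grad_V)   # gradient part
        circulating = -torch.einsum("bij,bj->bi", W, grad_V)   # detailed-balance-breaking part
        return dissipative + circulating


if __name__ == "__main__":
    drift = StructuredDrift(dim=3)
    x = torch.randn(8, 3)
    print(drift(x).shape)  # torch.Size([8, 3])
```

For reference, in the standard overdamped setting with constant diffusion matrix D and stationary density p_s, the time-irreversible component of the drift corresponds to the stationary probability current J_s = b p_s - D ∇p_s, and the entropy production rate is the quadratic form ∫ J_s · D⁻¹ J_s / p_s dx; this is the textbook expression, and the estimator used in the paper may differ in detail.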
Related papers
- KoopGen: Koopman Generator Networks for Representing and Predicting Dynamical Systems with Continuous Spectra [65.11254608352982]
We introduce a generator-based neural Koopman framework that models dynamics through a structured, state-dependent representation of Koopman generators. By exploiting the intrinsic Cartesian decomposition into skew-adjoint and self-adjoint components, KoopGen separates conservative transport from irreversible dissipation (a minimal sketch of this matrix decomposition appears after the related-papers list below).
arXiv Detail & Related papers (2026-02-15T06:32:23Z) - Is memory all you need? Data-driven Mori-Zwanzig modeling of Lagrangian particle dynamics in turbulent flows [38.33325744358047]
We show how one can learn a surrogate dynamical system that is able to evolve a turbulent Lagrangian trajectory in a way that is point-wise accurate for short-time predictions. This opens up a range of new applications, for example, for the control of active Lagrangian agents in turbulence.
arXiv Detail & Related papers (2025-07-21T20:50:55Z) - Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation. Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and forces -- to represent both autonomous and non-autonomous processes in neural systems. Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
arXiv Detail & Related papers (2025-07-15T17:57:48Z) - Generative System Dynamics in Recurrent Neural Networks [56.958984970518564]
We investigate the continuous-time dynamics of Recurrent Neural Networks (RNNs). We show that skew-symmetric weight matrices are fundamental to enable stable limit cycles in both linear and nonlinear configurations. Numerical simulations showcase how nonlinear activation functions not only maintain limit cycles, but also enhance the numerical stability of the system integration process.
arXiv Detail & Related papers (2025-04-16T10:39:43Z) - Model-free learning of probability flows: Elucidating the nonequilibrium dynamics of flocking [15.238808518078567]
The high dimensionality of the phase space renders traditional computational techniques infeasible for estimating the entropy production rate (EPR).
We derive a new physical connection between the probability current and two local definitions of the EPR for inertial systems.
Our results highlight that entropy is consumed on the spatial interface of a flock as the interplay between alignment and fluctuation dynamically creates and annihilates order.
arXiv Detail & Related papers (2024-11-21T17:08:06Z) - An optimization-based equilibrium measure describes non-equilibrium steady state dynamics: application to edge of chaos [2.5690340428649328]
Understanding neural dynamics is a central topic in machine learning, non-linear physics and neuroscience.
The dynamics is nonlinear and, in particular, non-gradient, i.e., the driving force cannot be written as the gradient of a potential.
arXiv Detail & Related papers (2024-01-18T14:25:32Z) - Dynamics with autoregressive neural quantum states: application to critical quench dynamics [41.94295877935867]
We present an alternative general scheme that enables one to capture long-time dynamics of quantum systems in a stable fashion.
We apply the scheme to time-dependent quench dynamics by investigating the Kibble-Zurek mechanism in the two-dimensional quantum Ising model.
arXiv Detail & Related papers (2022-09-07T15:50:00Z) - Structure-Preserving Learning Using Gaussian Processes and Variational Integrators [62.31425348954686]
We propose combining a variational integrator for the nominal dynamics of a mechanical system with Gaussian process regression for learning the residual dynamics.
We extend our approach to systems with known kinematic constraints and provide formal bounds on the prediction uncertainty.
arXiv Detail & Related papers (2021-12-10T11:09:29Z) - Physics-aware, probabilistic model order reduction with guaranteed stability [0.0]
We propose a generative framework for learning an effective, lower-dimensional, coarse-grained dynamical model.
We demonstrate its efficacy and accuracy in multiscale physical systems of particle dynamics.
arXiv Detail & Related papers (2021-01-14T19:16:51Z) - Learning non-stationary Langevin dynamics from stochastic observations of latent trajectories [0.0]
Inferring Langevin equations from data can reveal how the transient dynamics of a stochastic system give rise to its function.
Here we present a non-stationary framework for inferring the Langevin equation, which explicitly models the observation process and non-stationary latent dynamics.
Omitting any of these non-stationary components results in incorrect inference, in which erroneous features arise in the dynamics due to the non-stationary data distribution.
arXiv Detail & Related papers (2020-12-29T21:22:21Z) - Stochastically forced ensemble dynamic mode decomposition for forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
arXiv Detail & Related papers (2020-10-08T20:25:52Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the stochasticity in its success is still unclear.
We show that heavy tails commonly arise in the parameters of discrete-time dynamics driven by multiplicative noise.
A detailed analysis is conducted of the key contributing factors, including step size and data, with similar behavior observed on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
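A short aside on the Cartesian decomposition invoked in the KoopGen entry above: any real square matrix, such as a finite-dimensional approximation of a generator, splits uniquely into a self-adjoint and a skew-adjoint part, which is the algebraic basis for separating irreversible dissipation from conservative transport. The snippet below is purely illustrative and uses an arbitrary random matrix rather than anything from the cited papers.

```python
# Cartesian decomposition A = S + K of a real matrix:
#   S = (A + A.T) / 2  (self-adjoint / symmetric part, real spectrum),
#   K = (A - A.T) / 2  (skew-adjoint / antisymmetric part, purely imaginary spectrum).
# Illustrative only; not code from the KoopGen paper.
import numpy as np

A = np.random.default_rng(0).normal(size=(4, 4))
S = (A + A.T) / 2
K = (A - A.T) / 2

assert np.allclose(A, S + K)
print(np.abs(np.linalg.eigvals(S).imag).max())  # ~0: eigenvalues of S are real
print(np.abs(np.linalg.eigvals(K).real).max())  # ~0: eigenvalues of K are purely imaginary
```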
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.