Equilibrium flow: From Snapshots to Dynamics
- URL: http://arxiv.org/abs/2509.17990v1
- Date: Mon, 22 Sep 2025 16:33:20 GMT
- Title: Equilibrium flow: From Snapshots to Dynamics
- Authors: Yanbo Zhang, Michael Levin
- Abstract summary: We introduce the Equilibrium flow method, a framework that learns continuous dynamics that preserve a given pattern distribution. For high-dimensional Turing patterns from the Gray-Scott model, we develop an efficient, training-free variant that achieves high fidelity to the ground truth. This capability extends beyond recovering known systems, enabling a new paradigm of inverse design for Artificial Life.
- Score: 4.741100658955037
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Scientific data, from cellular snapshots in biology to celestial distributions in cosmology, often consists of static patterns from underlying dynamical systems. These snapshots, while lacking temporal ordering, implicitly encode the processes that preserve them. This work investigates how strongly such a distribution constrains its underlying dynamics and how to recover them. We introduce the Equilibrium flow method, a framework that learns continuous dynamics that preserve a given pattern distribution. Our method successfully identifies plausible dynamics for 2-D systems and recovers the signature chaotic behavior of the Lorenz attractor. For high-dimensional Turing patterns from the Gray-Scott model, we develop an efficient, training-free variant that achieves high fidelity to the ground truth, validated both quantitatively and qualitatively. Our analysis reveals the solution space is constrained not only by the data but also by the learning model's inductive biases. This capability extends beyond recovering known systems, enabling a new paradigm of inverse design for Artificial Life. By specifying a target pattern distribution, we can discover the local interaction rules that preserve it, leading to the spontaneous emergence of complex behaviors, such as life-like flocking, attraction, and repulsion patterns, from simple, user-defined snapshots.
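The abstract's core idea, learning dynamics under which a given pattern distribution is stationary, can be illustrated with a toy example. The sketch below is a hypothetical illustration, not the paper's implementation: for a standard 2-D Gaussian density p, the score is grad log p(x) = -x, and any linear field v(x) = A x with antisymmetric A satisfies the stationarity (continuity-equation) condition v · grad log p + div v = 0, so its flow moves individual points while leaving the distribution unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Antisymmetric generator: a pure rotation field v(x) = A x.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Check the stationarity residual  v . grad log p + div v  at random points.
# For p = N(0, I): grad log p(x) = -x, so the residual is -x^T A x + tr(A).
x = rng.standard_normal((1000, 2))
residual = -np.einsum("ni,ij,nj->n", x, A, x) + np.trace(A)
print(np.max(np.abs(residual)))   # zero up to float error

# Evolve samples of p under dx/dt = A x for time t (exact flow = rotation)
# and confirm the empirical covariance is still close to the identity.
t = 0.7
flow = np.array([[np.cos(t), -np.sin(t)],
                 [np.sin(t),  np.cos(t)]])
samples = rng.standard_normal((20000, 2))
evolved = samples @ flow.T
print(np.cov(evolved.T))
```

The point of the toy example is that many distinct dynamics preserve the same distribution (here, rotations at any speed), which is why the paper emphasizes that the recovered dynamics are constrained both by the data and by the model's inductive biases.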
Related papers
- Causal Structure Learning for Dynamical Systems with Theoretical Score Analysis [7.847876045564289]
Real-world systems evolve in continuous time according to their underlying causal relationships, yet their dynamics are often unknown. We propose CaDyT, a novel method for causal discovery on dynamical systems. Our experiments show that CaDyT outperforms state-of-the-art methods on both regularly and irregularly sampled data.
arXiv Detail & Related papers (2025-12-16T12:41:22Z) - Curly Flow Matching for Learning Non-gradient Field Dynamics [49.480209466896035]
We introduce Curly Flow Matching (Curly-FM), a novel approach to learning non-gradient field dynamics. Curly-FM learns such dynamics by designing and solving a Schrödinger bridge problem. Curly-FM can learn trajectories that better match both the reference process and population marginals.
arXiv Detail & Related papers (2025-10-30T16:11:39Z) - Identifiable learning of dissipative dynamics [25.409059056398124]
We introduce I-OnsagerNet, a neural framework that learns dissipative dynamics directly from trajectories. I-OnsagerNet extends the Onsager principle to guarantee that the learned potential is obtained from the stationary density. Our approach enables us to calculate the entropy production and to quantify irreversibility, offering a principled way to detect and quantify deviations from equilibrium.
arXiv Detail & Related papers (2025-10-28T07:57:14Z) - Data-driven particle dynamics: Structure-preserving coarse-graining for emergent behavior in non-equilibrium systems [0.8796261172196743]
Multiscale systems are notoriously challenging to simulate, as short temporal scales must be appropriately linked to emergent bulk physics. We propose a framework using the metriplectic bracket formalism that preserves discrete notions of the first and second laws of thermodynamics. We provide open-source implementations in both PyTorch and LAMMPS, enabling large-scale inference and application to diverse particle-based systems.
arXiv Detail & Related papers (2025-08-18T02:10:18Z) - Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation. Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and forces -- to represent both autonomous and non-autonomous processes in neural systems. Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
arXiv Detail & Related papers (2025-07-15T17:57:48Z) - Dynamical Diffusion: Learning Temporal Dynamics with Diffusion Models [71.63194926457119]
We introduce Dynamical Diffusion (DyDiff), a theoretically sound framework that incorporates temporally aware forward and reverse processes. Experiments across scientific spatiotemporal forecasting, video prediction, and time series forecasting demonstrate that Dynamical Diffusion consistently improves performance in temporal predictive tasks.
arXiv Detail & Related papers (2025-03-02T16:10:32Z) - Identifiable Representation and Model Learning for Latent Dynamic Systems [0.0]
We study the problem of identifiable representation and model learning for latent dynamic systems. We prove that, for linear and affine nonlinear latent dynamic systems with sparse input matrices, it is possible to identify the latent variables up to scaling.
arXiv Detail & Related papers (2024-10-23T13:55:42Z) - Latent Traversals in Generative Models as Potential Flows [113.4232528843775]
We propose to model latent structures with a learned dynamic potential landscape.
Inspired by physics, optimal transport, and neuroscience, these potential landscapes are learned as physically realistic partial differential equations.
Our method achieves trajectories that are both qualitatively and quantitatively more disentangled than those of state-of-the-art baselines.
arXiv Detail & Related papers (2023-04-25T15:53:45Z) - Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z) - Discovering dynamical features of Hodgkin-Huxley-type model of physiological neuron using artificial neural network [0.0]
We consider a Hodgkin-Huxley-type system with two fast variables and one slow variable.
For these two systems we create artificial neural networks that are able to reproduce their dynamics.
For the bistable model, this means that the network, trained on only one branch of the solutions, recovers the other branch without seeing it during training.
arXiv Detail & Related papers (2022-03-26T19:04:19Z) - Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z) - The entanglement membrane in chaotic many-body systems [0.0]
In certain analytically tractable quantum chaotic systems, the calculation of out-of-time-order correlation functions, entanglement entropies after a quench, and other related dynamical observables reduces to an effective theory of an "entanglement membrane" in spacetime.
We show here how to make sense of this membrane in more realistic models, which do not involve an average over random unitaries.
arXiv Detail & Related papers (2019-12-27T19:01:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.