Capturing reduced-order quantum many-body dynamics out of equilibrium via neural ordinary differential equations
- URL: http://arxiv.org/abs/2512.13913v1
- Date: Mon, 15 Dec 2025 21:48:10 GMT
- Title: Capturing reduced-order quantum many-body dynamics out of equilibrium via neural ordinary differential equations
- Authors: Patrick Egenlauf, Iva Březinová, Sabine Andergassen, Miriam Klopotek
- Abstract summary: We show that a neural ODE model trained on exact 2RDM data can reproduce its dynamics without any explicit three-particle information. The magnitude of the time-averaged three-particle-correlation buildup appears to be the primary predictor of success.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Out-of-equilibrium quantum many-body systems exhibit rapid correlation buildup that underlies many emergent phenomena. Exact wave-function methods for describing these dynamics scale exponentially with particle number; simpler mean-field approaches neglect essential two-particle correlations. The time-dependent two-particle reduced density matrix (TD2RDM) formalism offers a middle ground by propagating the two-particle reduced density matrix (2RDM) and closing the BBGKY hierarchy with a reconstruction of the three-particle cumulant. However, the validity and even the existence of time-local reconstruction functionals that ignore memory effects remain unclear across different dynamical regimes. We show that a neural ODE model trained on exact 2RDM data (no dimensionality reduction) can reproduce its dynamics without any explicit three-particle information -- but only in parameter regions where the Pearson correlation between the two- and three-particle cumulants is large. In the anti-correlated or uncorrelated regime, the neural ODE fails, indicating that no simple time-local functional of the instantaneous two-particle cumulant can capture the evolution. The magnitude of the time-averaged three-particle-correlation buildup appears to be the primary predictor of success: for a moderate correlation buildup, both neural ODE predictions and existing TD2RDM reconstructions are accurate, whereas stronger buildup leads to systematic breakdowns. These findings pinpoint the need for memory-dependent kernels in the three-particle cumulant reconstruction in the latter regime. Our results establish the neural ODE as a model-agnostic diagnostic tool that maps the regime of applicability of cumulant-expansion methods and guides the development of non-local closure schemes. More broadly, the ability to learn high-dimensional RDM dynamics from limited data opens a pathway to fast, data-driven simulation of correlated quantum matter.
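The core idea of the abstract -- propagate a reduced state with a learned vector field instead of the full hierarchy, then use a Pearson-correlation diagnostic to judge when a time-local closure can work -- can be sketched in a few lines. This is a minimal toy illustration, not the paper's model: the 2-dimensional state, the fixed random network weights, and the two scalar "cumulant" observables are all hypothetical stand-ins for the trained network and the actual 2RDM data.

```python
import math
import random

random.seed(0)

# Toy stand-in for a trained neural vector field f_theta(y):
# one tanh layer with fixed random weights (hypothetical, for illustration).
DIM, HID = 2, 8
W1 = [[random.gauss(0.0, 0.5) for _ in range(DIM)] for _ in range(HID)]
W2 = [[random.gauss(0.0, 0.5) for _ in range(HID)] for _ in range(DIM)]

def f_theta(y):
    h = [math.tanh(sum(W1[i][j] * y[j] for j in range(DIM))) for i in range(HID)]
    return [sum(W2[i][j] * h[j] for j in range(HID)) for i in range(DIM)]

def rk4_step(y, dt):
    # Classical fourth-order Runge-Kutta step for dy/dt = f_theta(y).
    k1 = f_theta(y)
    k2 = f_theta([y[i] + 0.5 * dt * k1[i] for i in range(DIM)])
    k3 = f_theta([y[i] + 0.5 * dt * k2[i] for i in range(DIM)])
    k4 = f_theta([y[i] + dt * k3[i] for i in range(DIM)])
    return [y[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(DIM)]

def propagate(y0, dt, n_steps):
    traj = [list(y0)]
    for _ in range(n_steps):
        traj.append(rk4_step(traj[-1], dt))
    return traj

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

traj = propagate([1.0, 0.0], 0.05, 200)
# Diagnostic in the spirit of the paper: correlate two observables over time
# (stand-ins for the two- and three-particle cumulant magnitudes).
c2 = [abs(y[0]) for y in traj]
c3 = [abs(y[1]) for y in traj]
print(pearson(c2, c3))
```

In the paper's framing, a large (positive) correlation between the two series is the regime where a time-local closure -- and hence the neural ODE surrogate -- can succeed.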
Related papers
- On the Mechanism and Dynamics of Modular Addition: Fourier Features, Lottery Ticket, and Grokking [49.1352577985191]
We present a comprehensive analysis of how two-layer neural networks learn features to solve the modular addition task. Our work provides a full mechanistic interpretation of the learned model and a theoretical explanation of its training dynamics.
arXiv Detail & Related papers (2026-02-18T20:25:13Z) - Unrolled Networks are Conditional Probability Flows in MRI Reconstruction [13.185194525641478]
We introduce flow ODEs to MRI reconstruction by theoretically proving that unrolled networks are discrete implementations of conditional probability flow ODEs. This connection provides explicit formulations for the parameters and clarifies how intermediate states should evolve. We propose Flow-Aligned Training (FLAT), which derives unrolled parameters from the ODE discretization and aligns intermediate reconstructions with the ideal ODE trajectory to improve stability and convergence.
arXiv Detail & Related papers (2025-12-02T18:48:10Z) - Upper Approximation Bounds for Neural Oscillators [8.075776288865907]
Quantifying the approximation capacity of neural network architectures remains a significant theoretical challenge. This study considers the neural oscillator, consisting of a second-order ODE followed by a multilayer perceptron. The results provide a robust theoretical foundation for the effective application of neural oscillators in science and engineering.
arXiv Detail & Related papers (2025-11-30T18:20:40Z) - VEDA: 3D Molecular Generation via Variance-Exploding Diffusion with Annealing [4.288647933894182]
VEDA is a framework that combines variance-exploding diffusion with annealing to generate 3D molecular structures. On the QM9 and GEOM-DRUGS datasets, VEDA matches the sampling efficiency of flow-based models. VEDA's generated structures are remarkably stable, as measured by their relaxation energy.
arXiv Detail & Related papers (2025-11-11T05:45:37Z) - Towards Fast Coarse-graining and Equation Discovery with Foundation Inference Models [6.403678133359229]
Latent dynamics in high-dimensional recordings are often characterized by a much smaller set of effective variables. Most machine learning approaches tackle these tasks jointly by training autoencoders together with models that enforce dynamical consistency. We propose to decouple the two problems by leveraging the recently introduced Foundation Inference Models (FIMs). A proof of concept on a double-well system with semicircle diffusion, embedded into synthetic video data, illustrates the potential of this approach for fast and reusable coarse-graining pipelines.
arXiv Detail & Related papers (2025-10-14T15:17:23Z) - PHASE-Net: Physics-Grounded Harmonic Attention System for Efficient Remote Photoplethysmography Measurement [63.007237197267834]
Existing deep learning methods for remote physiological monitoring mostly lack theoretical robustness. We propose a physics-informed rPPG paradigm derived from the Navier-Stokes equations of hemodynamics, showing that the pulse signal follows a second-order dynamical system. This provides a theoretical justification for using a Temporal Convolutional Network (TCN). PHASE-Net achieves state-of-the-art performance with strong efficiency, offering a theoretically grounded and deployment-ready rPPG solution.
arXiv Detail & Related papers (2025-09-29T14:36:45Z) - Generative AI Models for Learning Flow Maps of Stochastic Dynamical Systems in Bounded Domains [7.325529913721375]
Simulating stochastic differential equations (SDEs) in bounded domains requires accurate modeling of both interior dynamics and boundary interactions. Existing learning methods are not applicable to SDEs in bounded domains because they cannot accurately capture particle exit dynamics. We present a unified hybrid data-driven approach that combines a conditional diffusion model with an exit-prediction neural network to capture both interior dynamics and boundary exit phenomena.
arXiv Detail & Related papers (2025-07-17T13:27:49Z) - RadioDiff-$k^2$: Helmholtz Equation Informed Generative Diffusion Model for Multi-Path Aware Radio Map Construction [76.24833675757033]
We propose a physics-informed generative learning approach, named RadioDiff-$k^2$, for accurate and efficient multipath-aware radio map (RM) construction. We show that the proposed RadioDiff-$k^2$ framework achieves state-of-the-art (SOTA) performance in both image-level RM construction and localization tasks.
arXiv Detail & Related papers (2025-04-22T06:28:13Z) - Equivariant Graph Neural Operator for Modeling 3D Dynamics [148.98826858078556]
We propose the Equivariant Graph Neural Operator (EGNO), which directly models dynamics as trajectories instead of just next-step predictions.
EGNO explicitly learns the temporal evolution of 3D dynamics where we formulate the dynamics as a function over time and learn neural operators to approximate it.
Comprehensive experiments in multiple domains, including particle simulations, human motion capture, and molecular dynamics, demonstrate the significantly superior performance of EGNO against existing methods.
arXiv Detail & Related papers (2024-01-19T21:50:32Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
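The "build a differentiable model once, then fit unknown parameters to data" idea can be illustrated with a minimal parameter-recovery loop. This is a hypothetical toy, not the paper's Hamiltonian model: the model y = a·sin(x), the true value a = 2.5, and the hand-derived gradient all stand in for a neural surrogate and automatic differentiation.

```python
import math

# Synthetic "experimental" data from a known ground truth (hypothetical).
true_a = 2.5
xs = [0.1 * i for i in range(50)]
data = [true_a * math.sin(x) for x in xs]

def loss_and_grad(a):
    # L(a) = sum_i (a*sin(x_i) - y_i)^2 ;  dL/da = sum_i 2*(a*sin(x_i) - y_i)*sin(x_i)
    L, g = 0.0, 0.0
    for x, y in zip(xs, data):
        r = a * math.sin(x) - y
        L += r * r
        g += 2.0 * r * math.sin(x)
    return L, g

# Gradient descent recovers the unknown parameter from the data.
a, lr = 0.0, 0.01
for _ in range(200):
    _, g = loss_and_grad(a)
    a -= lr * g
print(round(a, 3))  # prints 2.5
```

With a neural surrogate in place of sin, the gradient would come from automatic differentiation instead of the hand-derived formula, but the fitting loop is the same -- which is what makes real-time refitting against streaming scattering data feasible.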
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - A Functional-Space Mean-Field Theory of Partially-Trained Three-Layer Neural Networks [49.870593940818715]
We study the infinite-width limit of a type of three-layer NN model whose first layer is random and fixed.
Our theory accommodates different scaling choices of the model, resulting in two regimes of the MF limit that demonstrate distinctive behaviors.
arXiv Detail & Related papers (2022-10-28T17:26:27Z) - Continuous and time-discrete non-Markovian system-reservoir interactions: Dissipative coherent quantum feedback in Liouville space [62.997667081978825]
We investigate a quantum system simultaneously exposed to two structured reservoirs.
We employ a numerically exact quasi-2D tensor network combining both diagonal and off-diagonal system-reservoir interactions with a twofold memory for continuous and discrete retardation effects.
As a possible example, we study the non-Markovian interplay between discrete photonic feedback and structured acoustic phonon modes, resulting in emerging inter-reservoir correlations and long-lived population trapping within an initially excited two-level system.
arXiv Detail & Related papers (2020-11-10T12:38:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.