Quantum-Inspired Tensor Networks for Approximating PDE Flow Maps
- URL: http://arxiv.org/abs/2602.15906v1
- Date: Mon, 16 Feb 2026 10:06:19 GMT
- Title: Quantum-Inspired Tensor Networks for Approximating PDE Flow Maps
- Authors: Nahid Binandeh Dehaghani, Ban Q. Tran, Rafal Wisniewski, Susan Mengel, A. Pedro Aguiar
- Abstract summary: We investigate quantum-inspired tensor networks (QTNs) for approximating flow maps of hydrodynamic partial differential equations (PDEs). Motivated by the effective low-rank structure that emerges after tensorization, we encode PDE states as matrix product states (MPS). Experiments on one- and two-dimensional linear advection-diffusion and nonlinear viscous Burgers equations demonstrate accurate short-horizon prediction, favorable scaling in smooth diffusive regimes, and error growth in nonlinear multi-step predictions.
- Score: 1.7887197093662073
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate quantum-inspired tensor networks (QTNs) for approximating flow maps of hydrodynamic partial differential equations (PDEs). Motivated by the effective low-rank structure that emerges after tensorization of discretized transport and diffusion dynamics, we encode PDE states as matrix product states (MPS) and represent the evolution operator as a structured low-rank matrix product operator (MPO) in tensor-train form (e.g., arising from finite-difference discretizations assembled in MPO form). The MPO is applied directly in MPS form, and rank growth is controlled via canonicalization and SVD-based truncation after each step. We provide theoretical context through standard matrix product properties, including exact MPS representability bounds, local optimality of SVD truncation, and a Lipschitz-type multi-step error propagation estimate. Experiments on one- and two-dimensional linear advection-diffusion and nonlinear viscous Burgers equations demonstrate accurate short-horizon prediction, favorable scaling in smooth diffusive regimes, and error growth in nonlinear multi-step predictions.
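The abstract's core pipeline, encoding a discretized PDE state as an MPS and controlling rank growth via SVD-based truncation, can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions (a length-2^n state vector, physical dimension 2, a fixed bond-rank cutoff), not the authors' implementation; the function names and the rank-8 cutoff are illustrative:

```python
import numpy as np

def vector_to_mps(v, max_rank):
    """Tensorize a length-2^n vector into a matrix product state (MPS)
    via sequential SVDs, truncating each bond to at most max_rank."""
    n = int(np.log2(v.size))
    assert 2**n == v.size, "length must be a power of two"
    cores = []
    rest = v.reshape(1, -1)                      # (left_rank, remaining)
    for _ in range(n - 1):
        r = rest.shape[0]
        rest = rest.reshape(r * 2, -1)           # split off one physical index
        u, s, vt = np.linalg.svd(rest, full_matrices=False)
        k = min(max_rank, s.size)                # SVD truncation at the bond
        cores.append(u[:, :k].reshape(r, 2, k))  # core with physical dim 2
        rest = s[:k, None] * vt[:k]              # absorb singular values rightward
    cores.append(rest.reshape(rest.shape[0], 2, 1))
    return cores

def mps_to_vector(cores):
    """Contract an MPS back into a dense vector."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out.reshape(-1)

# A smooth state on 2^8 grid points compresses to low bond rank,
# consistent with the low-rank structure the paper exploits.
x = np.linspace(0, 1, 2**8, endpoint=False)
u = np.exp(-50 * (x - 0.5)**2)                   # Gaussian pulse
mps = vector_to_mps(u, max_rank=8)
err = np.linalg.norm(mps_to_vector(mps) - u) / np.linalg.norm(u)
```

In a full scheme of the kind the paper describes, the evolution operator would be applied as an MPO to the MPS at each step, followed by the same canonicalization-and-truncation pass to keep bond ranks bounded.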
Related papers
- Rethinking Diffusion Models with Symmetries through Canonicalization with Applications to Molecular Graph Generation [56.361076943802594]
CanonFlow achieves state-of-the-art performance on the challenging GEOM-DRUG dataset, and the advantage remains large in few-step generation.
arXiv Detail & Related papers (2026-02-16T18:58:55Z)
- High-Dimensional Limit of Stochastic Gradient Flow via Dynamical Mean-Field Theory [6.2000582635449994]
Modern machine learning models are typically trained via multi-pass stochastic gradient descent (SGD) with small batch sizes. We analyze the high-dimensional dynamics of a differential equation called a stochastic gradient flow (SGF). We show that the resulting DMFT equations recover several existing high-dimensional descriptions of SGD dynamics as special cases.
arXiv Detail & Related papers (2026-02-06T02:37:10Z)
- Structure-Informed Estimation for Pilot-Limited MIMO Channels via Tensor Decomposition [51.56484100374058]
This paper formulates pilot-limited channel estimation as low-rank tensor completion from sparse observations. Experiments on synthetic channels demonstrate a 10-20 dB normalized mean-square error (NMSE) improvement over least-squares (LS), and evaluations on DeepMIMO ray-tracing channels show 24-44% additional NMSE reduction over pure tensor-based methods.
arXiv Detail & Related papers (2026-02-03T23:38:05Z)
- Differentiable Inverse Modeling with Physics-Constrained Latent Diffusion for Heterogeneous Subsurface Parameter Fields [9.42765150299276]
We present a latent diffusion-based differentiable inversion method (LD-DIM) for PDE-constrained inverse problems involving high-dimensional spatially distributed coefficients. LD-DIM couples a pretrained latent diffusion prior with an end-to-end differentiable numerical solver to reconstruct unknown heterogeneous parameter fields in a low-dimensional nonlinear manifold.
arXiv Detail & Related papers (2025-12-27T01:01:19Z)
- Linearized Diffusion Map [0.0]
We introduce the Linearized Diffusion Map (LDM), a novel linear dimensionality reduction method constructed via a linear approximation of the diffusion-map kernel. Our analysis positions LDM as a valuable new linear dimensionality reduction technique with promising theoretical and practical extensions.
arXiv Detail & Related papers (2025-07-18T11:56:41Z)
- Low-Rank Tensor Recovery via Variational Schatten-p Quasi-Norm and Jacobian Regularization [49.85875869048434]
We propose a CP-based low-rank tensor function parameterized by neural networks for implicit neural representation. To achieve sparser CP decomposition, we introduce a variational Schatten-p quasi-norm to prune redundant rank-1 components. For smoothness, we propose a regularization term based on the spectral norm of the Jacobian and Hutchinson's trace estimator.
arXiv Detail & Related papers (2025-06-27T11:23:10Z)
- Optimization-Induced Dynamics of Lipschitz Continuity in Neural Networks [7.486235601021366]
Lipschitz continuity characterizes the worst-case sensitivity of neural networks to small input perturbations. We present a rigorous framework to model the temporal evolution of Lipschitz continuity during training with gradient descent.
arXiv Detail & Related papers (2025-06-23T12:49:13Z)
- Towards Training Without Depth Limits: Batch Normalization Without Gradient Explosion [83.90492831583997]
We show that a batch-normalized network can keep the optimal signal propagation properties, but avoid exploding gradients in depth.
We use a Multi-Layer Perceptron (MLP) with linear activations and batch-normalization that provably has bounded depth.
We also design an activation shaping scheme that empirically achieves the same properties for certain non-linear activations.
arXiv Detail & Related papers (2023-10-03T12:35:02Z)
- Maximum-likelihood Estimators in Physics-Informed Neural Networks for High-dimensional Inverse Problems [0.0]
Physics-informed neural networks (PINNs) have proven a suitable mathematical scaffold for solving inverse ordinary (ODE) and partial (PDE) differential equations.
In this work, we demonstrate that inverse PINNs can be framed in terms of maximum-likelihood estimators (MLE) to allow explicit error propagation to the physical model space through Taylor expansion.
arXiv Detail & Related papers (2023-04-12T17:15:07Z)
- Training Deep Energy-Based Models with f-Divergence Minimization [113.97274898282343]
Deep energy-based models (EBMs) are very flexible in distribution parametrization but computationally challenging.
We propose a general variational framework termed f-EBM to train EBMs using any desired f-divergence.
Experimental results demonstrate the superiority of f-EBM over contrastive divergence, as well as the benefits of training EBMs using f-divergences other than KL.
arXiv Detail & Related papers (2020-03-06T23:11:13Z)
- Stochastic Normalizing Flows [52.92110730286403]
We introduce normalizing flows for maximum likelihood estimation and variational inference (VI) using stochastic differential equations (SDEs).
Using the theory of rough paths, the underlying Brownian motion is treated as a latent variable and approximated, enabling efficient training of neural SDEs.
These SDEs can be used for constructing efficient chains to sample from the underlying distribution of a given dataset.
arXiv Detail & Related papers (2020-02-21T20:47:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.