Tensor Network Fluid Simulations in Structured Domains Using the Lattice Boltzmann Method
- URL: http://arxiv.org/abs/2512.07615v2
- Date: Tue, 16 Dec 2025 13:07:59 GMT
- Title: Tensor Network Fluid Simulations in Structured Domains Using the Lattice Boltzmann Method
- Authors: Lukas Gross, Elie Mounzer, David M. Wawrzyniak, Josef M. Winter, Nikolaus A. Adams
- Abstract summary: We introduce a tensor-network formulation of the lattice Boltzmann method based on matrix product states (MPS). We show that in structured media, the MPS representation yields compression ratios exceeding two orders of magnitude. Our results position tensor networks as a scalable paradigm for continuum mechanics.
- Score: 1.1744028458220428
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-fidelity fluid simulations are central to understanding transport phenomena, yet resolving large or geometrically complex systems remains computationally prohibitive with existing methods. Recently, methods based on tensor networks, commonly known as quantum-inspired approaches, were proposed to efficiently simulate flow in simple domains, where complexity emerges mostly from turbulence. Here, we substantially extend the understanding of such methods by demonstrating for the first time that flows governed by translational or approximate symmetries of the geometry also exhibit very low effective complexity in matrix product state (MPS) form. To this end, we introduce a tensor-network formulation of the lattice Boltzmann method based on MPS and demonstrate the generality of the method on three-dimensional flows through structured media and complex vascular geometries, establishing that tensor-network techniques can efficiently resolve fluid dynamics in complex domains previously inaccessible to MPS approaches. We show that in structured media, the MPS representation yields compression ratios exceeding two orders of magnitude while preserving physical structure and dynamical fidelity. This reduction enables systematic numerical exploration of regimes that were previously intractable. Our results position tensor networks as a scalable paradigm for continuum mechanics.
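To illustrate the idea behind the abstract's compression claim, the sketch below decomposes a smooth one-dimensional field into MPS (tensor train) cores via successive truncated SVDs and reports the resulting compression ratio. This is a minimal toy example, not the paper's lattice Boltzmann implementation; the test field, grid size, and truncation tolerance are all assumptions made here for illustration.

```python
import numpy as np

def mps_decompose(field, tol=1e-8):
    """Split a length-2^n field into MPS cores by repeated truncated SVDs.
    Illustrative only; not the paper's solver."""
    n = int(np.log2(field.size))
    assert 2**n == field.size, "field length must be a power of two"
    cores = []
    mat = field.reshape(1, -1)
    rank = 1
    for _ in range(n - 1):
        # Fold one binary "site" index into the rows, then truncate.
        mat = mat.reshape(rank * 2, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(s > tol * s[0])))
        cores.append(u[:, :keep].reshape(rank, 2, keep))
        mat = s[:keep, None] * vt[:keep]
        rank = keep
    cores.append(mat.reshape(rank, 2, 1))
    return cores

def mps_reconstruct(cores):
    """Contract the cores back into a dense vector."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape(-1)

# A smooth, symmetric field has very low effective MPS complexity.
x = np.linspace(0, 1, 2**12, endpoint=False)
field = np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)
cores = mps_decompose(field)
n_params = sum(c.size for c in cores)
ratio = field.size / n_params
err = np.max(np.abs(mps_reconstruct(cores) - field))
print(f"compression ratio {ratio:.1f}x, max error {err:.2e}")
```

Because a band-limited sinusoid factorizes exactly over the binary site indices, the bond dimensions stay tiny and the parameter count falls far below the dense grid size, which is the same mechanism the paper exploits for fields with translational or approximate symmetries.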
Related papers
- Emergent Manifold Separability during Reasoning in Large Language Models [46.78826734548872]
Chain-of-Thought prompting significantly improves reasoning in Large Language Models. We quantify the linear separability of latent representations without the confounding factors of probe training.
arXiv Detail & Related papers (2026-02-23T20:36:17Z) - Tensor Network Framework for Forecasting Nonlinear and Chaotic Dynamics [1.790605517028706]
We present a tensor network model (TNM) for forecasting nonlinear and chaotic dynamics. We show that the TNM accurately reconstructs short-term trajectories and faithfully captures the attractor geometry.
arXiv Detail & Related papers (2025-11-12T11:49:38Z) - Efficient Approximation of Volterra Series for High-Dimensional Systems [0.0]
We introduce the Head Averaging (THA) algorithm, which significantly reduces complexity by constructing localized MVMALS models trained on small subsets of the input space. THA offers a scalable and theoretically grounded approach for identifying previously intractable high-dimensional systems.
arXiv Detail & Related papers (2025-11-09T20:31:39Z) - Topological Regularization for Force Prediction in Active Particle Suspension with EGNN and Persistent Homology [0.0]
We present a multi-scale framework that combines three learning-driven tools that learn in concert within one pipeline. We use high-resolution Lattice Boltzmann snapshots of fluid velocity and particle stresses in a periodic box as input to the learning pipeline.
arXiv Detail & Related papers (2025-09-08T11:39:42Z) - Connectivity matters: Impact of bath modes ordering and geometry in open quantum system simulation with Tensor Network States [0.0]
Tensor network-based methods are state-of-the-art approaches for performing numerically exact simulations. We show for canonical model Hamiltonians that simple orderings of bosonic environmental modes, which enable the joint System + Environments state to be written as a matrix product state, considerably reduce the bond dimension required for convergence.
arXiv Detail & Related papers (2024-09-06T09:20:08Z) - Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical utilization of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - Open Quantum System Dynamics from Infinite Tensor Network Contraction [0.0]
We show that for Gaussian environments highly efficient contraction to matrix product operator (MPO) form can be achieved with infinite MPO evolution methods.
arXiv Detail & Related papers (2023-07-04T16:12:03Z) - Hybrid quantum physics-informed neural networks for simulating computational fluid dynamics in complex shapes [37.69303106863453]
We present a hybrid quantum physics-informed neural network that simulates laminar fluid flows in 3D Y-shaped mixers.
Our approach combines the expressive power of a quantum model with the flexibility of a physics-informed neural network, resulting in a 21% higher accuracy compared to a purely classical neural network.
arXiv Detail & Related papers (2023-04-21T20:49:29Z) - Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics [96.9177297872723]
We present a novel method for guaranteeing linear momentum in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
arXiv Detail & Related papers (2022-10-12T09:12:59Z) - Bayesian Inference of Stochastic Dynamical Networks [0.0]
This paper presents a novel method for learning network topology and internal dynamics.
Our method achieves state-of-the-art performance compared with group sparse Bayesian learning (GSBL), BINGO, kernel-based methods, dynGENIE3, GENIE3 and ARNI.
arXiv Detail & Related papers (2022-06-02T03:22:34Z) - Decimation technique for open quantum systems: a case study with driven-dissipative bosonic chains [62.997667081978825]
Unavoidable coupling of quantum systems to external degrees of freedom leads to dissipative (non-unitary) dynamics.
We introduce a method to deal with these systems based on the calculation of the (dissipative) lattice Green's function.
We illustrate the power of this method with several examples of driven-dissipative bosonic chains of increasing complexity.
arXiv Detail & Related papers (2022-02-15T19:00:09Z) - A Multisite Decomposition of the Tensor Network Path Integrals [0.0]
We extend the tensor network path integral (TNPI) framework to efficiently simulate quantum systems with local dissipative environments.
The MS-TNPI method is useful for studying a variety of extended quantum systems coupled with solvents.
arXiv Detail & Related papers (2021-09-20T17:55:53Z) - A tensor network representation of path integrals: Implementation and analysis [0.0]
We introduce a novel tensor network-based decomposition of path integral simulations involving the Feynman-Vernon influence functional.
The finite temporally non-local interactions introduced by the influence functional can be captured very efficiently using a matrix product state representation.
The flexibility of the AP-TNPI framework makes it a promising new addition to the family of path integral methods for non-equilibrium quantum dynamics.
arXiv Detail & Related papers (2021-06-23T16:41:54Z) - Improving Metric Dimensionality Reduction with Distributed Topology [68.8204255655161]
DIPOLE is a dimensionality-reduction post-processing step that corrects an initial embedding by minimizing a loss functional with both a local, metric term and a global, topological term.
We observe that DIPOLE outperforms popular methods like UMAP, t-SNE, and Isomap on a number of popular datasets.
arXiv Detail & Related papers (2021-06-14T17:19:44Z) - ResNet-LDDMM: Advancing the LDDMM Framework Using Deep Residual Networks [86.37110868126548]
In this work, we make use of deep residual neural networks to solve the non-stationary ODE (flow equation) based on an Euler discretization scheme.
We illustrate these ideas on diverse registration problems of 3D shapes under complex topology-preserving transformations.
arXiv Detail & Related papers (2021-02-16T04:07:13Z) - An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.