Operator Inference and Physics-Informed Learning of Low-Dimensional
Models for Incompressible Flows
- URL: http://arxiv.org/abs/2010.06701v2
- Date: Mon, 7 Dec 2020 21:32:48 GMT
- Title: Operator Inference and Physics-Informed Learning of Low-Dimensional
Models for Incompressible Flows
- Authors: Peter Benner, Pawan Goyal, Jan Heiland, Igor Pontes Duff
- Abstract summary: We suggest a new approach to learning structured low-order models for incompressible flow from data.
We show that learning dynamics of the velocity and pressure can be decoupled, thus leading to an efficient operator inference approach.
- Score: 5.756349331930218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reduced-order modeling has a long tradition in computational fluid dynamics.
The ever-increasing significance of data for the synthesis of low-order models
is well reflected in the recent successes of data-driven approaches such as
Dynamic Mode Decomposition and Operator Inference. With this work, we suggest a
new approach to learning structured low-order models for incompressible flow
from data that can be used for engineering studies such as control,
optimization, and simulation. To that end, we utilize the intrinsic structure
of the Navier-Stokes equations for incompressible flows and show that learning
dynamics of the velocity and pressure can be decoupled, thus leading to an
efficient operator inference approach for learning the underlying dynamics of
incompressible flows. Furthermore, we show the operator inference performance
in learning low-order models using two benchmark problems and compare with an
intrusive method, namely proper orthogonal decomposition, and other data-driven
approaches.
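The following is a minimal, hedged sketch of the generic operator-inference workflow behind the abstract: project snapshots onto a POD basis and fit a quadratic reduced model dq/dt = A q + H (q ⊗ q) by regularized least squares. It assumes the reduced dynamics are quadratic and that time derivatives of the snapshots are available (e.g. from finite differences); the function names and the regularization parameter are illustrative, and the paper's specific velocity/pressure decoupling on a divergence-free basis is not implemented here.

```python
# Minimal operator-inference sketch (assumption: generic quadratic reduced model
# dq/dt = A q + H (q kron q); not the authors' velocity/pressure-decoupled method).
import numpy as np

def pod_basis(Q, r):
    """Leading r left singular vectors of the snapshot matrix Q (n x k)."""
    U, _, _ = np.linalg.svd(Q, full_matrices=False)
    return U[:, :r]

def infer_operators(Q, Qdot, r, reg=1e-8):
    """Infer reduced linear (A) and quadratic (H) operators from snapshots Q
    and their time derivatives Qdot via Tikhonov-regularized least squares."""
    V = pod_basis(Q, r)
    Qhat = V.T @ Q                      # reduced states, r x k
    Rhat = V.T @ Qdot                   # reduced time derivatives, r x k
    # Data matrix stacking q and kron(q, q) for each snapshot column
    quad = np.vstack([np.kron(Qhat[:, j], Qhat[:, j])
                      for j in range(Qhat.shape[1])]).T
    D = np.vstack([Qhat, quad])         # (r + r^2) x k
    # Solve min_O ||O D - Rhat||_F^2 + reg ||O||_F^2 via the normal equations
    lhs = D @ D.T + reg * np.eye(D.shape[0])
    O = np.linalg.solve(lhs, D @ Rhat.T).T
    return V, O[:, :r], O[:, r:]        # basis, A (r x r), H (r x r^2)
```

A learned model can then be integrated, e.g. with scipy.integrate.solve_ivp, using the right-hand side A @ q + H @ np.kron(q, q), and lifted back to the full space via V @ q.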
Related papers
- Deep Learning for Koopman Operator Estimation in Idealized Atmospheric Dynamics [2.2489531925874013]
Deep learning is revolutionizing weather forecasting, with new data-driven models achieving accuracy on par with operational physical models for medium-term predictions.
These models often lack interpretability, making their underlying dynamics difficult to understand and explain.
This paper proposes methodologies to estimate the Koopman operator, providing a linear representation of complex nonlinear dynamics to enhance the transparency of data-driven models.
arXiv Detail & Related papers (2024-09-10T13:56:54Z) - Koopman-Based Surrogate Modelling of Turbulent Rayleigh-Bénard Convection [4.248022697109535]
We use a Koopman-inspired architecture called the Linear Recurrent Autoencoder Network (LRAN) for learning reduced-order dynamics in convection flows.
A traditional fluid-dynamics method, Kernel Dynamic Mode Decomposition (KDMD), is used as a baseline against which the LRAN is compared (a generic DMD sketch is given after this list).
We obtained more accurate predictions with the LRAN than with KDMD in the most turbulent setting.
arXiv Detail & Related papers (2024-05-10T12:15:02Z) - Data-driven Nonlinear Model Reduction using Koopman Theory: Integrated
Control Form and NMPC Case Study [56.283944756315066]
We propose generic model structures combining delay-coordinate encoding of measurements and full-state decoding to integrate reduced Koopman modeling and state estimation.
A case study demonstrates that our approach provides accurate control models and enables real-time capable nonlinear model predictive control of a high-purity cryogenic distillation column.
arXiv Detail & Related papers (2024-01-09T11:54:54Z) - Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST), a recently proposed and highly effective technique for communication- and memory-efficient distributed training.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z) - A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
arXiv Detail & Related papers (2023-05-31T15:33:16Z) - Forecasting through deep learning and modal decomposition in two-phase
concentric jets [2.362412515574206]
This work aims to improve the performance of fuel-chamber injectors in turbofan engines.
This requires models that allow real-time prediction and improvement of the fuel/air mixture.
arXiv Detail & Related papers (2022-12-24T12:59:41Z) - Operator inference with roll outs for learning reduced models from
scarce and low-quality data [0.0]
We propose to combine data-driven modeling via operator inference with the dynamic training via roll outs of neural ordinary differential equations.
We show that operator inference with roll outs provides predictive models from training trajectories even if data are sampled sparsely in time and polluted with noise of up to 10%.
arXiv Detail & Related papers (2022-12-02T19:41:31Z) - Active operator inference for learning low-dimensional dynamical-system
models from noisy data [0.0]
Noise poses a challenge for learning dynamical-system models because even small variations can distort the dynamics described by the trajectory data.
This work builds on operator inference from scientific machine learning to infer low-dimensional models from high-dimensional state trajectories polluted with noise.
arXiv Detail & Related papers (2021-07-20T04:30:07Z) - Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z) - An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows yields stable and effective training and provably solves the vanishing/exploding-gradient problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z) - Forecasting Sequential Data using Consistent Koopman Autoencoders [52.209416711500005]
A new class of physics-based methods related to Koopman theory has been introduced, offering an alternative for processing nonlinear dynamical systems.
We propose a novel Consistent Koopman Autoencoder model which, unlike the majority of existing work, leverages the forward and backward dynamics.
Key to our approach is a new analysis which explores the interplay between consistent dynamics and their associated Koopman operators.
arXiv Detail & Related papers (2020-03-04T18:24:30Z)
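Several of the entries above (the Koopman operator estimation, LRAN/KDMD, and Consistent Koopman Autoencoder papers) build on dynamic-mode-decomposition-style approximations of the Koopman operator. As noted in the LRAN/KDMD entry, here is a minimal exact-DMD sketch; it is a standard textbook baseline written in NumPy, not the code of any of the listed papers, and the rank r is an assumed truncation parameter.

```python
# Minimal exact-DMD sketch (assumption: a standard baseline standing in for the
# Koopman/DMD-style estimators cited above; not the LRAN or KDMD implementation).
import numpy as np

def dmd(X, Y, r):
    """Fit a rank-r linear map Y ~ A X from snapshot pairs (columns of X, Y)
    and return its eigenvalues and exact DMD modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vh[:r, :].conj().T
    Atilde = Ur.conj().T @ Y @ Vr / sr      # r x r reduced operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = (Y @ Vr / sr) @ W               # exact DMD modes
    return eigvals, modes
```

With snapshots sampled at a uniform step dt, np.log(eigvals) / dt approximates continuous-time growth rates and frequencies, and each column of modes gives the corresponding spatial structure.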
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.