CD-ROM: Complemented Deep-Reduced Order Model
- URL: http://arxiv.org/abs/2202.10746v4
- Date: Tue, 2 May 2023 13:58:28 GMT
- Title: CD-ROM: Complemented Deep-Reduced Order Model
- Authors: Emmanuel Menier, Michele Alessandro Bucci, Mouadh Yagoubi, Lionel
Mathelin, Marc Schoenauer
- Abstract summary: This paper proposes a deep learning based closure modeling approach for classical POD-Galerkin reduced order models (ROM).
The proposed approach is theoretically grounded, using neural networks to approximate well studied operators.
The capabilities of the CD-ROM approach are demonstrated on two classical examples from Computational Fluid Dynamics, as well as a parametric case, the Kuramoto-Sivashinsky equation.
- Score: 2.02258267891574
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Model order reduction through the POD-Galerkin method can lead to dramatic
gains in terms of computational efficiency in solving physical problems.
However, the applicability of the method to nonlinear high-dimensional
dynamical systems such as the Navier-Stokes equations has been shown to be
limited, producing inaccurate and sometimes unstable models. This paper
proposes a deep learning based closure modeling approach for classical
POD-Galerkin reduced order models (ROM). The proposed approach is theoretically
grounded, using neural networks to approximate well studied operators. In
contrast with most previous works, the present CD-ROM approach is based on an
interpretable continuous memory formulation, derived from simple hypotheses on
the behavior of partially observed dynamical systems. The final corrected
models can hence be simulated using most classical time stepping schemes. The
capabilities of the CD-ROM approach are demonstrated on two classical examples
from Computational Fluid Dynamics, as well as a parametric case, the
Kuramoto-Sivashinsky equation.
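The POD-Galerkin reduction the abstract refers to can be sketched in a few lines. Below is a minimal NumPy illustration on a hypothetical stable linear system (not the paper's code, and without the closure term CD-ROM adds): snapshots are collected, a POD basis is extracted by SVD, and the operator is Galerkin-projected onto that basis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical full-order linear system dx/dt = A x with n = 200 states.
n, r = 200, 10
A = -np.diag(np.linspace(1.0, 5.0, n))  # stable diagonal operator
x0 = rng.standard_normal(n)

# Collect snapshots by explicit Euler time stepping.
dt, steps = 1e-3, 2000
X = np.empty((n, steps))
x = x0.copy()
for k in range(steps):
    x = x + dt * (A @ x)
    X[:, k] = x

# POD basis: leading left singular vectors of the snapshot matrix.
V, _, _ = np.linalg.svd(X, full_matrices=False)
V = V[:, :r]

# Galerkin projection: reduced operator A_r = V^T A V.
Ar = V.T @ A @ V

# Simulate the ROM in reduced coordinates, then lift back to full space.
z = V.T @ x0
for k in range(steps):
    z = z + dt * (Ar @ z)
x_rom = V @ z

# Relative error of the ROM against the full-order final state.
err = np.linalg.norm(x_rom - X[:, -1]) / np.linalg.norm(X[:, -1])
```

On this toy linear system the truncated basis captures the dynamics almost exactly; the closure problem CD-ROM addresses arises when truncated modes feed back into the retained ones, as happens in nonlinear systems such as the Navier-Stokes equations.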
Related papers
- Model Order Reduction for Open Quantum Systems Based on Measurement-adapted Time-coarse Graining [9.507267560064669]
We present a model order reduction technique to reduce the time complexity of open quantum systems.
The method organizes corrections to the lowest-order model which aligns with the RWA Hamiltonian in certain limits.
We derive the fourth-order EQME for a challenging problem related to the dynamics of a superconducting qubit.
arXiv Detail & Related papers (2024-10-30T15:26:42Z)
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- Physics-guided weak-form discovery of reduced-order models for trapped ultracold hydrodynamics [0.0]
We study the relaxation of a highly collisional, ultracold but nondegenerate gas of polar molecules.
The gas is subject to fluid-gas coupled dynamics that lead to a breakdown of first-order hydrodynamics.
We present substantially improved reduced-order models for these same observables.
arXiv Detail & Related papers (2024-06-11T17:50:04Z)
- Predicting Ordinary Differential Equations with Transformers [65.07437364102931]
We develop a transformer-based sequence-to-sequence model that recovers scalar ordinary differential equations (ODEs) in symbolic form from irregularly sampled and noisy observations of a single solution trajectory.
Our method is efficiently scalable: after one-time pretraining on a large set of ODEs, we can infer the governing law of a new observed solution in a few forward passes of the model.
arXiv Detail & Related papers (2023-07-24T08:46:12Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Symplectic model reduction of Hamiltonian systems using data-driven quadratic manifolds [0.559239450391449]
We present two novel approaches for the symplectic model reduction of high-dimensional Hamiltonian systems.
The addition of quadratic terms to the state approximation, which sits at the heart of the proposed methodologies, enables us to better represent intrinsic low-dimensionality.
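The quadratic state approximation described above can be illustrated with a small, hypothetical NumPy sketch (not the authors' code): snapshots are approximated as x ≈ V z + W q(z), where z are the linear POD coordinates, q(z) collects quadratic monomials of z, and W is fit by least squares to the residual of the purely linear approximation.

```python
import numpy as np

# Hypothetical snapshots of a curved, effectively one-parameter family:
# two state components vary linearly in s, two vary quadratically.
s = np.linspace(-1.0, 1.0, 400)
X = np.stack([s, s, s**2, s**2], axis=0)  # shape (4, 400)

# Linear POD basis of size r = 1.
U, _, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :1]
Z = V.T @ X                      # reduced coordinates, shape (1, 400)

# Quadratic features: with r = 1 the only monomial is z^2.
Q = Z**2

# Fit W by least squares on the linear-POD residual: X - V Z ≈ W Q.
R = X - V @ Z
W, *_ = np.linalg.lstsq(Q.T, R.T, rcond=None)
W = W.T

lin_err = np.linalg.norm(R)                   # error of the linear approximation
quad_err = np.linalg.norm(X - V @ Z - W @ Q)  # error with the quadratic term
```

On this toy data the quadratic term absorbs the curvature the linear subspace cannot represent, which is exactly the intrinsic low-dimensionality argument the summary alludes to.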
arXiv Detail & Related papers (2023-05-24T18:23:25Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Neural Operator with Regularity Structure for Modeling Dynamics Driven by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS) which incorporates the feature vectors for modeling dynamics driven by SPDEs.
We conduct experiments on a variety of SPDEs, including the dynamic Phi^4_1 model and the 2d Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z)
- Nonlinear proper orthogonal decomposition for convection-dominated flows [0.0]
We propose an end-to-end Galerkin-free model combining autoencoders with long short-term memory networks for dynamics.
Our approach not only improves the accuracy, but also significantly reduces the computational cost of training and testing.
arXiv Detail & Related papers (2021-10-15T18:05:34Z)
- Neural Ordinary Differential Equations for Data-Driven Reduced Order Modeling of Environmental Hydrodynamics [4.547988283172179]
We explore the use of Neural Ordinary Differential Equations for fluid flow simulation.
Test problems we consider include incompressible flow around a cylinder and real-world applications of shallow water hydrodynamics in riverine and estuarine systems.
Our findings indicate that Neural ODEs provide an elegant framework for stable and accurate evolution of latent-space dynamics with a promising potential of extrapolatory predictions.
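A Neural ODE evolves a latent state by integrating dz/dt = f_theta(z) with a standard solver. The sketch below (an untrained toy network with fixed random weights, purely illustrative and unrelated to the paper's models) shows the mechanics with a classical RK4 step:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "neural" vector field: one hidden tanh layer with fixed random weights.
W1 = 0.5 * rng.standard_normal((16, 3))
W2 = 0.5 * rng.standard_normal((3, 16))

def f(z):
    """Right-hand side dz/dt = f(z) of the latent ODE."""
    return W2 @ np.tanh(W1 @ z)

def rk4_step(z, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(z)
    k2 = f(z + 0.5 * dt * k1)
    k3 = f(z + 0.5 * dt * k2)
    k4 = f(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate the latent trajectory forward in time.
z = rng.standard_normal(3)
dt = 0.01
traj = [z.copy()]
for _ in range(500):
    z = rk4_step(z, dt)
    traj.append(z.copy())
traj = np.array(traj)
```

In a trained model, f would be learned from data and the latent trajectory would be decoded back to the physical state; the bounded tanh nonlinearity here keeps the toy integration well behaved.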
arXiv Detail & Related papers (2021-04-22T19:20:47Z)
- Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.