A two stages Deep Learning Architecture for Model Reduction of
Parametric Time-Dependent Problems
- URL: http://arxiv.org/abs/2301.09926v2
- Date: Wed, 25 Jan 2023 07:36:14 GMT
- Title: A two stages Deep Learning Architecture for Model Reduction of
Parametric Time-Dependent Problems
- Authors: Isabella Carla Gonnella, Martin W. Hess, Giovanni Stabile, Gianluigi
Rozza
- Abstract summary: Parametric time-dependent systems are of crucial importance in modeling real phenomena.
We present a general two-stage deep learning framework able to perform that generalization with low computational effort in time.
Results are obtained by applying the framework to the incompressible Navier-Stokes equations in a cavity.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Parametric time-dependent systems are of crucial importance in modeling real phenomena, and they are often characterized by non-linear behavior as well. Their solutions are typically difficult to generalize over a sufficiently wide parameter space while counting on the limited computational resources available. As such, we present a general two-stage deep learning framework able to perform that generalization with low computational effort in time. It consists of the separate training of two pipelined predictive models. First, a certain number of independent neural networks are trained with data sets taken from different subsets of the parameter space. Subsequently, a second predictive model is specialized to properly combine the first-stage guesses and compute the final predictions. Promising results are obtained by applying the framework to the incompressible Navier-Stokes equations in a cavity (Rayleigh-Bénard cavity), obtaining a 97% reduction in computational time, compared with the numerical resolution, for a new value of the Grashof number.
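As a rough illustration of the two-stage pipeline described above, the sketch below trains two first-stage networks on disjoint parameter subranges and then a second-stage combiner on their frozen guesses. All names (LocalNet, Combiner), network sizes, and the synthetic target standing in for the full-order solver are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class LocalNet(nn.Module):
    """First-stage network, trained on one subset of the parameter space."""
    def __init__(self, in_dim=2, hidden=64, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):                    # x = (parameter, time)
        return self.net(x)

class Combiner(nn.Module):
    """Second-stage network that merges the frozen first-stage guesses."""
    def __init__(self, n_locals, hidden=32, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_locals + 2, hidden), nn.Tanh(),  # guesses + (parameter, time)
            nn.Linear(hidden, out_dim),
        )

    def forward(self, guesses, x):
        return self.net(torch.cat([guesses, x], dim=-1))

def fit(model, inputs, targets, epochs=500, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(*inputs), targets)
        loss.backward()
        opt.step()

torch.manual_seed(0)
target = lambda p, t: torch.sin(4.0 * p * t)     # stand-in for the solver output

# Stage 1: independent networks, one per parameter subrange.
subsets = [(0.0, 0.5), (0.5, 1.0)]
stage1 = [LocalNet() for _ in subsets]
for net, (lo, hi) in zip(stage1, subsets):
    p = lo + (hi - lo) * torch.rand(256, 1)      # parameters inside this subset
    t = torch.rand(256, 1)                       # time instants
    fit(net, (torch.cat([p, t], dim=-1),), target(p, t))

# Stage 2: the combiner learns to merge the frozen stage-1 guesses.
p, t = torch.rand(512, 1), torch.rand(512, 1)
x = torch.cat([p, t], dim=-1)
with torch.no_grad():
    guesses = torch.cat([net(x) for net in stage1], dim=-1)
combiner = Combiner(n_locals=len(stage1))
fit(combiner, (guesses, x), target(p, t))
```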
Related papers
- MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation [48.41289705783405]
We propose a PDE-embedded network with multiscale time stepping (MultiPDENet).
In particular, we design a convolutional filter, based on the structure of finite differences, with a small number of parameters to optimize.
A Physics Block with a 4th-order Runge-Kutta integrator at the fine time scale embeds the structure of the PDEs to guide the prediction.
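A generic sketch of those two ingredients, a fixed convolutional finite-difference stencil supplying the spatial derivative and a classical 4th-order Runge-Kutta step advancing it in time, applied here to a 1D heat equation for illustration; this is not the MultiPDENet code.

```python
import torch
import torch.nn.functional as F

dx, nu = 0.1, 0.05
# Second-derivative stencil [1, -2, 1] / dx^2 packed as a conv1d kernel.
stencil = torch.tensor([[[1.0, -2.0, 1.0]]]) / dx**2

def rhs(u):
    """du/dt = nu * u_xx for the 1D heat equation, periodic in x."""
    u_pad = F.pad(u, (1, 1), mode="circular")
    return nu * F.conv1d(u_pad, stencil)

def rk4_step(u, dt):
    """One classical 4th-order Runge-Kutta step."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = torch.linspace(0.0, 6.283, 64)
u = torch.sin(x).reshape(1, 1, -1)       # shape (batch, channel, grid)
for _ in range(100):                     # the amplitude decays, as expected
    u = rk4_step(u, dt=0.01)
```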
arXiv Detail & Related papers (2025-01-27T12:15:51Z)
- Enhancing Low-Order Discontinuous Galerkin Methods with Neural Ordinary Differential Equations for Compressible Navier-Stokes Equations [0.1578515540930834]
We introduce an end-to-end differentiable framework for solving the compressible Navier-Stokes equations.
This integrated approach combines a differentiable discontinuous Galerkin solver with a neural network source term.
We demonstrate the performance of the proposed framework through two examples.
arXiv Detail & Related papers (2023-10-29T04:26:23Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important when forecasting nonstationary processes or complex mixtures of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the underlying tessellation and approximate the multiple-hypotheses target distribution.
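For context, a minimal plain radial basis function regressor (Gaussian bases centered on the training points, fitted by ridge-regularized least squares); this generic sketch, with made-up data, does not implement the structured multiple-hypotheses model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(50, 1))             # training inputs
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal((50, 1))

centers, gamma, lam = x, 10.0, 1e-6                  # one Gaussian basis per point

def phi(a, c):
    """Design matrix of Gaussian radial basis functions, shape (n, n_centers)."""
    return np.exp(-gamma * (a - c.T) ** 2)

P = phi(x, centers)
w = np.linalg.solve(P.T @ P + lam * np.eye(len(centers)), P.T @ y)

x_test = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y_pred = phi(x_test, centers) @ w                    # smooth interpolant of the data
```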
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
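The POD reduction step mentioned above can be sketched with a plain SVD of a snapshot matrix; the sizes and the random data here are illustrative assumptions, and the temporal coefficients are what a network would then learn to predict.

```python
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((2000, 200))      # (spatial DOF, time instants)

mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

r = 5                                             # keep only a few POD modes
modes = U[:, :r]                                  # spatial POD modes
coeffs = np.diag(s[:r]) @ Vt[:r, :]               # temporal coefficients (r x n_t)

recon = mean + modes @ coeffs                     # rank-r reconstruction
energy = (s[:r] ** 2).sum() / (s ** 2).sum()      # fraction of energy retained
```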
arXiv Detail & Related papers (2023-01-24T08:39:20Z)
- Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z)
- Neural parameter calibration for large-scale multi-agent models [0.7734726150561089]
We present a method to retrieve accurate probability densities for parameters using neural differential equations.
Combined, the two create a powerful tool that can quickly estimate densities of model parameters, even for very large systems.
arXiv Detail & Related papers (2022-09-27T17:36:26Z)
- Deep Convolutional Architectures for Extrapolative Forecast in Time-dependent Flow Problems [0.0]
Deep learning techniques are employed to model the system dynamics for advection-dominated problems.
These models take as input a sequence of high-fidelity vector solutions for consecutive time-steps obtained from the PDEs.
Non-intrusive reduced-order modelling techniques such as deep auto-encoder networks are utilized to compress the high-fidelity snapshots.
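A minimal sketch of such auto-encoder compression, assuming 1D snapshots of length 64; the architecture and sizes are illustrative, not those of the paper.

```python
import torch
import torch.nn as nn

class SnapshotAE(nn.Module):
    """Compress a length-64 snapshot to a small latent code and back."""
    def __init__(self, latent=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv1d(16, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16, latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 32 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16)),
            nn.ConvTranspose1d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

snapshots = torch.randn(10, 1, 64)       # ten synthetic high-fidelity snapshots
model = SnapshotAE()
loss = nn.MSELoss()(model(snapshots), snapshots)
# A time-stepper (e.g. an LSTM) would then act on model.encoder(snapshots).
```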
arXiv Detail & Related papers (2022-09-18T03:45:56Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDEs) is an indispensable part of many branches of science, as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions as well as state-of-the-art numerical solvers, such as spectral solvers.
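For reference, a generic PINN sketch: the network is penalized on the PDE residual, computed by automatic differentiation, plus an initial-condition term. The 1D heat equation and all sizes are illustrative choices, not the GatedPINN architecture.

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

def pde_residual(x, t):
    """Residual of u_t = u_xx evaluated at collocation points (x, t)."""
    x.requires_grad_(True)
    t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=-1))
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - u_xx

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    x, t = torch.rand(128, 1), torch.rand(128, 1)    # collocation points
    x0 = torch.rand(128, 1)                          # initial-condition points
    opt.zero_grad()
    loss_pde = pde_residual(x, t).pow(2).mean()
    u0 = net(torch.cat([x0, torch.zeros_like(x0)], dim=-1))
    loss_ic = (u0 - torch.sin(torch.pi * x0)).pow(2).mean()
    (loss_pde + loss_ic).backward()
    opt.step()
```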
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.