Deep-HyROMnet: A deep learning-based operator approximation for
hyper-reduction of nonlinear parametrized PDEs
- URL: http://arxiv.org/abs/2202.02658v1
- Date: Sat, 5 Feb 2022 23:45:25 GMT
- Title: Deep-HyROMnet: A deep learning-based operator approximation for
hyper-reduction of nonlinear parametrized PDEs
- Authors: Ludovica Cicci, Stefania Fresca, Andrea Manzoni
- Abstract summary: We propose a strategy for learning nonlinear ROM operators using deep neural networks (DNNs).
The resulting hyper-reduced order model enhanced by DNNs is referred to as Deep-HyROMnet.
Numerical results show that Deep-HyROMnets are orders of magnitude faster than POD-Galerkin-DEIM ROMs, while keeping the same level of accuracy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To speed up the solution of parametrized differential problems, reduced order
models (ROMs) have been developed over the years, including projection-based
ROMs such as the reduced-basis (RB) method, deep learning-based ROMs, as well
as surrogate models obtained via a machine learning approach. Thanks to its
physics-based structure, ensured by the use of a Galerkin projection of the
full order model (FOM) onto a linear low-dimensional subspace, RB methods yield
approximations that fulfill the physical problem at hand. However, to make the
assembly of a ROM independent of the FOM dimension, intrusive and expensive
hyper-reduction stages are usually required, such as the discrete empirical
interpolation method (DEIM), making this strategy less feasible for
problems characterized by (high-order polynomial or nonpolynomial)
nonlinearities. To overcome this bottleneck, we propose a novel strategy for
learning nonlinear ROM operators using deep neural networks (DNNs). The
resulting hyper-reduced order model enhanced by deep neural networks, which
we refer to as Deep-HyROMnet, is thus a physics-based model: it still relies
on the RB method, but employs a DNN architecture to approximate
reduced residual vectors and Jacobian matrices once a Galerkin projection has
been performed. Numerical results dealing with fast simulations in nonlinear
structural mechanics show that Deep-HyROMnets are orders of magnitude faster
than POD-Galerkin-DEIM ROMs, keeping the same level of accuracy.
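The abstract's core idea can be illustrated with a toy sketch: after Galerkin projection onto a reduced basis, evaluating the reduced residual still requires FOM-sized assembly, and Deep-HyROMnet replaces that map with a DNN whose cost is independent of the FOM dimension. The code below is a minimal illustration with made-up dimensions, a toy residual, and an untrained placeholder MLP; it is not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: FOM size Nh, reduced size n, parameter size p.
Nh, n, p = 1000, 8, 2

# Reduced basis V (Nh x n) with orthonormal columns (stand-in for a POD basis).
V, _ = np.linalg.qr(rng.standard_normal((Nh, n)))

def fom_residual(u, mu):
    """Toy nonlinear FOM residual R(u; mu) -- a stand-in for an expensive
    assembly routine whose cost scales with Nh."""
    return np.tanh(u) + mu[0] * u**3 - mu[1]

def reduced_residual_galerkin(uN, mu):
    """Exact Galerkin-projected residual r_N = V^T R(V u_N; mu).
    Its evaluation cost still depends on Nh: the hyper-reduction bottleneck."""
    return V.T @ fom_residual(V @ uN, mu)

# Deep-HyROMnet idea: learn the map (u_N, mu) -> r_N with a DNN, so the
# online cost depends only on n and p. Here a tiny MLP with random,
# untrained weights stands in for that network.
W1 = 0.1 * rng.standard_normal((32, n + p))
W2 = 0.1 * rng.standard_normal((n, 32))

def reduced_residual_dnn(uN, mu):
    x = np.concatenate([uN, mu])
    return W2 @ np.tanh(W1 @ x)  # cost independent of Nh

uN = rng.standard_normal(n)
mu = np.array([0.5, 1.0])
print(reduced_residual_galerkin(uN, mu).shape)  # (8,)
print(reduced_residual_dnn(uN, mu).shape)       # (8,)
```

In the actual method, the same idea is applied to the reduced Jacobian matrices as well, so that Newton iterations can be carried out entirely at the reduced level.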
Related papers
- Ensemble Kalman Filtering Meets Gaussian Process SSM for Non-Mean-Field and Online Inference [47.460898983429374]
We introduce an ensemble Kalman filter (EnKF) into the non-mean-field (NMF) variational inference framework to approximate the posterior distribution of the latent states.
This novel marriage between EnKF and GPSSM not only eliminates the need for extensive parameterization in learning variational distributions, but also enables an interpretable, closed-form approximation of the evidence lower bound (ELBO).
We demonstrate that the resulting EnKF-aided online algorithm embodies a principled objective function by ensuring data-fitting accuracy while incorporating model regularizations to mitigate overfitting.
arXiv Detail & Related papers (2023-12-10T15:22:30Z) - Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning Controllable Adaptive Simulation for Multi-resolution Physics (LAMP) as the first fully deep learning-based surrogate model of its kind.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade off computation to improve long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art performance, with a relative gain of 11.5% averaged over seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - Deep learning applied to computational mechanics: A comprehensive
review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions in the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z) - Neural Operator with Regularity Structure for Modeling Dynamics Driven
by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas, including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS) which incorporates the feature vectors for modeling dynamics driven by SPDEs.
We conduct experiments on various SPDEs, including the dynamic Phi^4_1 model and the 2D Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z) - Non-linear manifold ROM with Convolutional Autoencoders and Reduced
Over-Collocation method [0.0]
Non-affine parametric dependencies, nonlinearities and advection-dominated regimes of the model of interest can result in a slow Kolmogorov n-width decay.
We implement the non-linear manifold method introduced by Carlberg et al. [37], with hyper-reduction achieved through reduced over-collocation and teacher-student training of a reduced decoder.
We test the methodology on a 2D non-linear conservation law and a 2D shallow water model, and compare the results with those of a purely data-driven method in which the dynamics is evolved in time by a long short-term memory (LSTM) network.
arXiv Detail & Related papers (2022-03-01T11:16:50Z) - Reduced order modeling with Barlow Twins self-supervised learning:
Navigating the space between linear and nonlinear solution manifolds [0.0]
The proposed framework relies on the combination of an autoencoder (AE) and Barlow Twins (BT) self-supervised learning.
We propose a unified data-driven reduced order model (ROM) that bridges the performance gap between linear and nonlinear manifold approaches.
arXiv Detail & Related papers (2022-02-11T05:41:33Z) - POD-DL-ROM: enhancing deep learning-based reduced order models for
nonlinear parametrized PDEs by proper orthogonal decomposition [0.0]
Deep learning-based reduced order models (DL-ROMs) have been recently proposed to overcome common limitations shared by conventional reduced order models (ROMs).
In this paper we propose a possible way to avoid an expensive training stage of DL-ROMs, by (i) performing a prior dimensionality reduction through POD, and (ii) relying on a multi-fidelity pretraining stage.
The proposed POD-DL-ROM is tested on several (both scalar and vector, linear and nonlinear) time-dependent parametrized PDEs.
arXiv Detail & Related papers (2021-01-28T07:34:15Z) - A fast and accurate physics-informed neural network reduced order model
with shallow masked autoencoder [0.19116784879310023]
Nonlinear manifold ROMs (NM-ROMs) can better approximate high-fidelity model solutions with a smaller latent space dimension than linear subspace ROMs (LS-ROMs).
Results show that neural networks can learn a more efficient latent space representation on advection-dominated data.
arXiv Detail & Related papers (2020-09-25T00:48:19Z) - An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z) - A comprehensive deep learning-based approach to reduced order modeling
of nonlinear time-dependent parametrized PDEs [0.0]
We show how to construct a DL-ROM for both linear and nonlinear time-dependent parametrized PDEs.
Numerical results indicate that DL-ROMs whose dimension is equal to the intrinsic dimensionality of the PDE solutions manifold are able to approximate the solution of parametrized PDEs.
arXiv Detail & Related papers (2020-01-12T21:18:18Z)
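Several of the methods listed above (POD-Galerkin-DEIM, POD-DL-ROM, and related ROMs) start from a POD basis computed via an SVD of solution snapshots. A minimal, generic sketch of that common first step is given below, using synthetic low-rank data; it is not tied to any specific paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Snapshot matrix S: each column is a FOM solution for one parameter/time.
# Synthetic low-rank data plus a small noise floor.
Nh, ns = 500, 40
S = rng.standard_normal((Nh, 3)) @ rng.standard_normal((3, ns))
S += 1e-6 * rng.standard_normal((Nh, ns))

# POD = truncated SVD of the snapshot matrix.
U, sigma, _ = np.linalg.svd(S, full_matrices=False)

# Choose the smallest n capturing 99.99% of the snapshot energy.
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
n = int(np.searchsorted(energy, 0.9999)) + 1
V = U[:, :n]  # POD basis, Nh x n, orthonormal columns

# Project and reconstruct a snapshot; the relative error is controlled
# by the discarded energy.
u = S[:, 0]
err = np.linalg.norm(u - V @ (V.T @ u)) / np.linalg.norm(u)
print(n, err)
```

For this synthetic rank-3 data, a very small basis suffices and the reconstruction error sits near the noise floor; in the papers above, the interesting cases are exactly those (e.g. advection-dominated problems) where this linear truncation decays slowly, motivating the nonlinear, DNN-based alternatives.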
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.