A comprehensive deep learning-based approach to reduced order modeling
of nonlinear time-dependent parametrized PDEs
- URL: http://arxiv.org/abs/2001.04001v1
- Date: Sun, 12 Jan 2020 21:18:18 GMT
- Title: A comprehensive deep learning-based approach to reduced order modeling
of nonlinear time-dependent parametrized PDEs
- Authors: Stefania Fresca, Luca Dede, Andrea Manzoni
- Abstract summary: We show how to construct a DL-ROM for both linear and nonlinear time-dependent parametrized PDEs.
Numerical results indicate that DL-ROMs whose dimension is equal to the intrinsic dimensionality of the PDE solutions manifold are able to approximate the solution of parametrized PDEs.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional reduced order modeling techniques such as the reduced basis (RB)
method (relying, e.g., on proper orthogonal decomposition (POD)) suffer from
severe limitations when dealing with nonlinear time-dependent parametrized
PDEs, because of the fundamental assumption of linear superimposition of modes
they are based on. For this reason, in the case of problems featuring coherent
structures that propagate over time such as transport, wave, or
convection-dominated phenomena, the RB method usually yields inefficient
reduced order models (ROMs) if one aims at obtaining reduced order
approximations sufficiently accurate compared to the high-fidelity, full order
model (FOM) solution. To overcome these limitations, in this work, we propose a
new nonlinear approach for building reduced order models by exploiting deep learning
(DL) algorithms. In the resulting nonlinear ROM, which we refer to as DL-ROM,
both the nonlinear trial manifold (corresponding to the set of basis functions
in a linear ROM) as well as the nonlinear reduced dynamics (corresponding to
the projection stage in a linear ROM) are learned in a non-intrusive way by
relying on DL algorithms; the latter are trained on a set of FOM solutions
obtained for different parameter values. In this paper, we show how to
construct a DL-ROM for both linear and nonlinear time-dependent parametrized
PDEs; moreover, we assess its accuracy on test cases featuring different
parametrized PDE problems. Numerical results indicate that DL-ROMs whose
dimension is equal to the intrinsic dimensionality of the PDE solutions
manifold are able to approximate the solution of parametrized PDEs in
situations where a huge number of POD modes would be necessary to achieve the
same degree of accuracy.
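As a rough illustration of the two learned ingredients described in the abstract, the sketch below assumes PyTorch; the module names, layer sizes, and loss weighting `omega` are illustrative placeholders and not the authors' implementation. A decoder plays the role of the nonlinear trial manifold, a feedforward network maps time and parameters to the reduced coordinates (the nonlinear reduced dynamics), and an encoder is used only during training to supervise the latent space on FOM snapshots.
```python
# Minimal DL-ROM-style sketch (assumes PyTorch); names and sizes are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Used only at training time: FOM snapshot u_h in R^N -> reduced coordinate in R^n."""
    def __init__(self, fom_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(fom_dim, 256), nn.ELU(),
                                 nn.Linear(256, latent_dim))

    def forward(self, u):
        return self.net(u)

class Decoder(nn.Module):
    """Nonlinear trial manifold: reduced coordinate -> approximate FOM solution."""
    def __init__(self, latent_dim: int, fom_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ELU(),
                                 nn.Linear(256, fom_dim))

    def forward(self, z):
        return self.net(z)

class ReducedDynamics(nn.Module):
    """Nonlinear reduced dynamics: (time, parameters) -> reduced coordinate."""
    def __init__(self, n_params: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1 + n_params, 64), nn.ELU(),
                                 nn.Linear(64, latent_dim))

    def forward(self, t, mu):
        return self.net(torch.cat([t, mu], dim=-1))

def dl_rom_loss(u, t, mu, enc, dec, dyn, omega=0.5):
    """Snapshot reconstruction plus latent matching, averaged over a batch of FOM snapshots."""
    z_enc = enc(u)        # reduced coordinates inferred from the snapshot
    z_dyn = dyn(t, mu)    # reduced coordinates predicted from (t, mu)
    u_rec = dec(z_dyn)    # decoded approximation of the snapshot
    return omega * (u - u_rec).pow(2).mean() + (1.0 - omega) * (z_enc - z_dyn).pow(2).mean()
```
In a setup of this kind, only the (t, mu)-to-reduced-coordinates map and the decoder are evaluated online, so no FOM solve is needed at prediction time; training is non-intrusive in the sense that it only consumes precomputed FOM snapshots.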
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Two-Stage ML-Guided Decision Rules for Sequential Decision Making under Uncertainty [55.06411438416805]
Sequential Decision Making under Uncertainty (SDMU) is ubiquitous in many domains such as energy, finance, and supply chains.
Some SDMU problems are naturally modeled as Multistage Problems (MSPs), but the resulting optimizations are notoriously challenging from a computational standpoint.
This paper introduces a novel approach, Two-Stage General Decision Rules (TS-GDR), to generalize the policy space beyond linear functions.
The effectiveness of TS-GDR is demonstrated through an instantiation using Deep Recurrent Neural Networks named Two-Stage Deep Decision Rules (TS-LDR)
arXiv Detail & Related papers (2024-05-23T18:19:47Z)
- Provably Efficient Algorithm for Nonstationary Low-Rank MDPs [48.92657638730582]
We make the first effort to investigate nonstationary RL under episodic low-rank MDPs, where both transition kernels and rewards may vary over time.
We propose a parameter-dependent policy optimization algorithm called PORTAL, and further improve PORTAL to its parameter-free version, Ada-PORTAL.
For both algorithms, we provide upper bounds on the average dynamic suboptimality gap, which show that as long as the nonstationarity is not significantly large, PORTAL and Ada-PORTAL are sample-efficient and can achieve an arbitrarily small average dynamic suboptimality gap with polynomial sample complexity.
arXiv Detail & Related papers (2023-08-10T09:52:44Z)
- A graph convolutional autoencoder approach to model order reduction for parametrized PDEs [0.8192907805418583]
The present work proposes a framework for nonlinear model order reduction based on a Graph Convolutional Autoencoder (GCA-ROM)
We develop a non-intrusive and data-driven nonlinear reduction approach, exploiting GNNs to encode the reduced manifold and enable fast evaluations of parametrized PDEs.
arXiv Detail & Related papers (2023-05-15T12:01:22Z)
- An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification [97.28167655721766]
We propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z)
- Reduced order modeling with Barlow Twins self-supervised learning: Navigating the space between linear and nonlinear solution manifolds [0.0]
We propose a unified data-driven reduced order model (ROM) that bridges the performance gap between linear and nonlinear manifold approaches.
The proposed framework relies on the combination of an autoencoder (AE) and Barlow Twins (BT) self-supervised learning.
arXiv Detail & Related papers (2022-02-11T05:41:33Z)
- Deep-HyROMnet: A deep learning-based operator approximation for hyper-reduction of nonlinear parametrized PDEs [0.0]
We propose a strategy for learning nonlinear ROM operators using deep neural networks (DNNs)
The resulting hyper-reduced order model enhanced by DNNs is referred to as Deep-HyROMnet.
Numerical results show that Deep-HyROMnets are orders of magnitude faster than POD-Galerkin-DEIM ROMs, keeping the same level of accuracy.
arXiv Detail & Related papers (2022-02-05T23:45:25Z)
- A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
- POD-DL-ROM: enhancing deep learning-based reduced order models for nonlinear parametrized PDEs by proper orthogonal decomposition [0.0]
Deep learning-based reduced order models (DL-ROMs) have been recently proposed to overcome common limitations shared by conventional reduced order models (ROMs)
In this paper we propose a possible way to avoid an expensive training stage of DL-ROMs, by (i) performing a prior dimensionality reduction through POD, and (ii) relying on a multi-fidelity pretraining stage.
The proposed POD-DL-ROM is tested on several (both scalar and vector, linear and nonlinear) time-dependent parametrized PDEs; a minimal sketch of the POD pre-reduction step is given after this list.
arXiv Detail & Related papers (2021-01-28T07:34:15Z)
- STENCIL-NET: Data-driven solution-adaptive discretization of partial differential equations [2.362412515574206]
We present STENCIL-NET, an artificial neural network architecture for data-driven learning of problem- and resolution-specific local discretizations of nonlinear PDEs.
Knowing the actual PDE is not necessary, as solution data is sufficient to train the network to learn the discrete operators.
A once-trained STENCIL-NET model can be used to predict solutions of the PDE on larger domains and for longer times than it was trained for.
arXiv Detail & Related papers (2021-01-15T15:43:41Z)
- DiffPD: Differentiable Projective Dynamics with Contact [65.88720481593118]
We present DiffPD, an efficient differentiable soft-body simulator with implicit time integration.
We evaluate the performance of DiffPD and observe a speedup of 4-19 times compared to the standard Newton's method in various applications.
arXiv Detail & Related papers (2021-01-15T00:13:33Z)
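For the POD-DL-ROM entry above, the following sketch shows what the prior dimensionality reduction through POD can look like: a POD basis is extracted from the snapshot matrix via an SVD, the networks are then trained on the k POD coefficients instead of the N-dimensional snapshots, and predictions are lifted back to the full-order space through the basis. It assumes NumPy; the function names are placeholders, not the POD-DL-ROM reference code.
```python
# Sketch of POD pre-reduction for a snapshot matrix of shape (N, n_snap)
# (assumes NumPy; illustrative only).
import numpy as np

def pod_basis(snapshots: np.ndarray, k: int) -> np.ndarray:
    """Return the first k left singular vectors (POD modes) of the snapshot matrix."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :k]                     # V in R^{N x k}, orthonormal columns

def project(snapshots: np.ndarray, V: np.ndarray) -> np.ndarray:
    """POD coefficients used as training data in place of the full snapshots."""
    return V.T @ snapshots              # shape (k, n_snap)

def reconstruct(coefficients: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Lift predicted POD coefficients back to the full-order space."""
    return V @ coefficients             # shape (N, n_snap)
```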