Long-time prediction of nonlinear parametrized dynamical systems by deep
learning-based reduced order models
- URL: http://arxiv.org/abs/2201.10215v1
- Date: Tue, 25 Jan 2022 10:15:17 GMT
- Title: Long-time prediction of nonlinear parametrized dynamical systems by deep
learning-based reduced order models
- Authors: Federico Fatone, Stefania Fresca, Andrea Manzoni
- Abstract summary: Deep learning-based reduced order models (DL-ROMs) have been recently proposed to overcome common limitations shared by conventional ROMs.
This work introduces the $\mu t$-POD-LSTM-ROM framework for efficient numerical approximation of parametrized PDEs.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based reduced order models (DL-ROMs) have been recently
proposed to overcome common limitations shared by conventional ROMs - built,
e.g., exclusively through proper orthogonal decomposition (POD) - when applied
to nonlinear time-dependent parametrized PDEs. In particular, POD-DL-ROMs can
achieve extreme efficiency in the training stage and faster-than-real-time
performance at testing, thanks to a prior dimensionality reduction through POD
and a DL-based prediction framework. Nonetheless, they share with conventional
ROMs poor performance on time extrapolation tasks. This work aims at
taking a further step towards the use of DL algorithms for the efficient
numerical approximation of parametrized PDEs by introducing the $\mu
t$-POD-LSTM-ROM framework. This novel technique extends the POD-DL-ROM
framework by adding a two-fold architecture taking advantage of long short-term
memory (LSTM) cells, ultimately allowing long-term prediction of complex
systems' evolution well beyond the training time window, for unseen input
parameter values. Numerical results show that this recurrent architecture
enables extrapolation over time windows up to 15 times larger than the
training time domain, and achieves better testing-time performance than the
already lightning-fast POD-DL-ROMs.
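The two-stage pipeline the abstract describes (prior POD compression of the snapshots, then a recurrent model that advances the reduced coordinates in time and extrapolates past the training window) can be sketched as follows. This is a minimal illustration on synthetic data: the snapshot construction, the reduced dimension, and the least-squares linear one-step map (standing in for the paper's LSTM cells) are all illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy full-order snapshots: each column is the solution at one time step,
# built from 4 sinusoidal temporal coefficients (purely synthetic data).
n_dofs, n_steps = 200, 60
t = np.linspace(0.0, 2.0, n_steps)
coeffs = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                    np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
modes = rng.standard_normal((n_dofs, 4))
S = modes @ coeffs

# Stage 1: prior dimensionality reduction by POD (truncated SVD),
# giving reduced coordinates q(t) = V_r^T u(t).
V, _, _ = np.linalg.svd(S, full_matrices=False)
Vr = V[:, :4]
Q = Vr.T @ S

# Stage 2: a learned time-stepper on Q. The paper uses LSTM cells; a
# least-squares linear one-step map stands in here as a minimal placeholder.
A, *_ = np.linalg.lstsq(Q[:, :-1].T, Q[:, 1:].T, rcond=None)

# Autoregressive rollout past the training window (time extrapolation),
# lifting each predicted reduced state back to full order via V_r.
q = Q[:, -1]
preds = []
for _ in range(20):
    q = A.T @ q
    preds.append(Vr @ q)
U_future = np.stack(preds, axis=1)  # shape (n_dofs, 20)
```

Because the toy reduced dynamics are exactly linear, the rollout reproduces the true future snapshots; a trained LSTM plays the same role for genuinely nonlinear reduced dynamics.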
Related papers
- On latent dynamics learning in nonlinear reduced order modeling [0.6249768559720122]
We present the novel mathematical framework of latent dynamics models (LDMs) for reduced order modeling of parameterized nonlinear time-dependent PDEs.
A time-continuous setting is employed to derive error and stability estimates for the LDM approximation of the full order model (FOM) solution.
Deep neural networks approximate the discrete LDM components, while providing a bounded approximation error with respect to the FOM.
arXiv Detail & Related papers (2024-08-27T16:35:06Z) - PTPI-DL-ROMs: pre-trained physics-informed deep learning-based reduced order models for nonlinear parametrized PDEs [0.6827423171182154]
In this paper, we consider a major extension of POD-DL-ROMs by making them physics-informed.
We first complement POD-DL-ROMs with a trunk net architecture, endowing them with the ability to compute the problem's solution at every point in the spatial domain.
In particular, we take advantage of the few available data to develop a low-cost pre-training procedure.
arXiv Detail & Related papers (2024-05-14T12:46:12Z) - PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from
the perspective of partial differential equations [49.80959046861793]
We present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers.
Our experimentation across seven diverse temporal real-world LMTF datasets reveals that PDETime adapts effectively to the intrinsic nature of the data.
arXiv Detail & Related papers (2024-02-25T17:39:44Z) - Multiplicative update rules for accelerating deep learning training and
increasing robustness [69.90473612073767]
We propose an optimization framework that fits a wide range of machine learning algorithms and enables one to apply alternative update rules.
We claim that the proposed framework accelerates training, while leading to more robust models than the traditionally used additive update rule.
arXiv Detail & Related papers (2023-07-14T06:44:43Z) - Gait Recognition in the Wild with Multi-hop Temporal Switch [81.35245014397759]
Gait recognition in the wild is a more practical problem that has attracted the attention of the multimedia and computer vision community.
This paper presents a novel multi-hop temporal switch method to achieve effective temporal modeling of gait patterns in real-world scenes.
arXiv Detail & Related papers (2022-09-01T10:46:09Z) - Deep-HyROMnet: A deep learning-based operator approximation for
hyper-reduction of nonlinear parametrized PDEs [0.0]
We propose a strategy for learning nonlinear ROM operators using deep neural networks (DNNs).
The resulting hyper-reduced order model enhanced by DNNs is referred to as Deep-HyROMnet.
Numerical results show that Deep-HyROMnets are orders of magnitude faster than POD-Galerkin-DEIMs, keeping the same level of accuracy.
arXiv Detail & Related papers (2022-02-05T23:45:25Z) - Dynamic Network-Assisted D2D-Aided Coded Distributed Learning [59.29409589861241]
We propose a novel device-to-device (D2D)-aided coded federated learning method (D2D-CFL) for load balancing across devices.
We derive an optimal compression rate for achieving minimum processing time and establish its connection with the convergence time.
Our proposed method is beneficial for real-time collaborative applications, where the users continuously generate training data.
arXiv Detail & Related papers (2021-11-26T18:44:59Z) - POD-DL-ROM: enhancing deep learning-based reduced order models for
nonlinear parametrized PDEs by proper orthogonal decomposition [0.0]
Deep learning-based reduced order models (DL-ROMs) have been recently proposed to overcome common limitations shared by conventional reduced order models (ROMs).
In this paper we propose a possible way to avoid an expensive training stage of DL-ROMs, by (i) performing a prior dimensionality reduction through POD, and (ii) relying on a multi-fidelity pretraining stage.
The proposed POD-DL-ROM is tested on several (both scalar and vector, linear and nonlinear) time-dependent parametrized PDEs.
arXiv Detail & Related papers (2021-01-28T07:34:15Z) - DiffPD: Differentiable Projective Dynamics with Contact [65.88720481593118]
We present DiffPD, an efficient differentiable soft-body simulator with implicit time integration.
We evaluate the performance of DiffPD and observe a speedup of 4-19 times compared to the standard Newton's method in various applications.
arXiv Detail & Related papers (2021-01-15T00:13:33Z) - HiPPO: Recurrent Memory with Optimal Polynomial Projections [93.3537706398653]
We introduce a general framework (HiPPO) for the online compression of continuous signals and discrete time series by projection onto bases.
Given a measure that specifies the importance of each time step in the past, HiPPO produces an optimal solution to a natural online function approximation problem.
This formal framework yields a new memory update mechanism (HiPPO-LegS) that scales through time to remember all history, avoiding priors on the timescale.
arXiv Detail & Related papers (2020-08-17T23:39:33Z) - A comprehensive deep learning-based approach to reduced order modeling
of nonlinear time-dependent parametrized PDEs [0.0]
We show how to construct a DL-ROM for both linear and nonlinear time-dependent parametrized PDEs.
Numerical results indicate that DL-ROMs whose dimension is equal to the intrinsic dimensionality of the PDE solutions manifold are able to approximate the solution of parametrized PDEs.
arXiv Detail & Related papers (2020-01-12T21:18:18Z)
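The HiPPO entry above describes the online compression of continuous signals by projection onto polynomial bases. A minimal sliding-window Legendre projection in that spirit (close to a HiPPO-LegT-style fixed-window measure, but using a plain least-squares fit; the window length, stride, and polynomial order are illustrative choices, not from the paper) looks like this:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_memory(window, order=5):
    """Least-squares Legendre coefficients of a window rescaled to [-1, 1]."""
    x = np.linspace(-1.0, 1.0, len(window))
    return legendre.legfit(x, window, order)

# A streaming signal, summarized every 20 samples: each memory vector
# compresses the most recent W samples into order+1 coefficients.
signal = np.sin(np.linspace(0.0, 6 * np.pi, 300))
W = 60
memories = [legendre_memory(signal[k - W:k])
            for k in range(W, len(signal) + 1, 20)]

# Reconstruct the most recent window from its memory vector to gauge fidelity.
recon = legendre.legval(np.linspace(-1.0, 1.0, W), memories[-1])
max_err = np.max(np.abs(recon - signal[-W:]))
```

HiPPO's contribution is to update such coefficient vectors recurrently with a closed-form linear recurrence (and, in HiPPO-LegS, a measure spanning all history) rather than refitting each window from scratch as done here.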
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.