$\textit{FastSVD-ML-ROM}$: A Reduced-Order Modeling Framework based on
Machine Learning for Real-Time Applications
- URL: http://arxiv.org/abs/2207.11842v1
- Date: Sun, 24 Jul 2022 23:11:07 GMT
- Title: $\textit{FastSVD-ML-ROM}$: A Reduced-Order Modeling Framework based on
Machine Learning for Real-Time Applications
- Authors: G. I. Drakoulas, T. V. Gortsas, G. C. Bourantas, V. N. Burganos, D.
Polyzos
- Abstract summary: High-fidelity numerical simulations constitute the backbone of engineering design.
Reduced order models (ROMs) are employed to approximate the high-fidelity solutions.
The present work proposes a new machine learning (ML) platform for the development of ROMs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Digital twins have emerged as a key technology for optimizing the performance
of engineering products and systems. High-fidelity numerical simulations
constitute the backbone of engineering design, providing an accurate insight
into the performance of complex systems. However, large-scale, dynamic,
non-linear models require significant computational resources and are
prohibitive for real-time digital twin applications. To this end, reduced order
models (ROMs) are employed to approximate the high-fidelity solutions while
accurately capturing the dominant aspects of the physical behavior. The present
work proposes a new machine learning (ML) platform for the development of ROMs,
to handle large-scale numerical problems dealing with transient nonlinear
partial differential equations. Our framework, referred to as
$\textit{FastSVD-ML-ROM}$, utilizes $\textit{(i)}$ a singular value
decomposition (SVD) update methodology, to compute a linear subspace of the
multi-fidelity solutions during the simulation process, $\textit{(ii)}$
convolutional autoencoders for nonlinear dimensionality reduction,
$\textit{(iii)}$ feed-forward neural networks to map the input parameters to
the latent spaces, and $\textit{(iv)}$ long short-term memory networks to
predict and forecast the dynamics of parametric solutions. The efficiency of
the $\textit{FastSVD-ML-ROM}$ framework is demonstrated for a 2D linear
convection-diffusion equation, the problem of fluid around a cylinder, and the
3D blood flow inside an arterial segment. The accuracy of the reconstructed
results demonstrates the robustness and efficiency of the proposed approach.
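Component (i) above, the SVD update, can be illustrated with a minimal numpy sketch of a rank-truncated incremental SVD in the style of Brand's update: new snapshot columns are appended to an existing basis without re-decomposing the full snapshot matrix. This is a schematic of the general technique, not the paper's implementation; the function name and truncation rule are illustrative.

```python
import numpy as np

def svd_update(U, S, new_cols, rank, tol=1e-10):
    """Append new snapshot columns to an existing truncated SVD basis.

    U: (n, r) current left singular vectors; S: (r,) singular values;
    new_cols: (n, k) new snapshots. Returns the updated basis and
    singular values, truncated to at most `rank` modes.
    """
    # Split the new snapshots into in-basis and out-of-basis parts
    proj = U.T @ new_cols               # coordinates in the current basis
    residual = new_cols - U @ proj      # component orthogonal to the basis
    Q, R = np.linalg.qr(residual)       # orthonormalize the new directions
    # Re-diagonalize the small (r+k) x (r+k) core matrix
    r, k = S.size, new_cols.shape[1]
    K = np.zeros((r + k, r + k))
    K[:r, :r] = np.diag(S)
    K[:r, r:] = proj
    K[r:, r:] = R
    Uk, Sk, _ = np.linalg.svd(K)
    U_new = np.hstack([U, Q]) @ Uk
    keep = min(rank, int(np.sum(Sk > tol)))
    return U_new[:, :keep], Sk[:keep]
```

The cost per update scales with the small core matrix rather than the full snapshot history, which is what makes on-the-fly subspace computation feasible during a simulation.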
Related papers
- Multilinear Kernel Regression and Imputation via Manifold Learning [5.482532589225551]
MultiL-KRIM builds on the intuitive concept of tangent spaces and incorporates collaboration among point-cloud neighbors (regressors) directly into the data-modeling term of the loss function.
Two important application domains showcase the functionality of MultiL-KRIM: time-varying-graph-signal (TVGS) recovery, and reconstruction of highly accelerated dynamic-magnetic-resonance-imaging (dMRI) data.
arXiv Detail & Related papers (2024-02-06T02:50:42Z)
- Better Neural PDE Solvers Through Data-Free Mesh Movers [13.013830215107735]
We develop a moving mesh based neural PDE solver (MM-PDE) that embeds the moving mesh with a two-branch architecture.
Our method generates suitable meshes and considerably enhances accuracy when modeling widely considered PDE systems.
arXiv Detail & Related papers (2023-12-09T14:05:28Z)
- Differentiable Turbulence II [0.0]
We develop a framework for integrating deep learning models into a generic finite element numerical scheme for solving the Navier-Stokes equations.
We show that the learned closure can achieve accuracy comparable to traditional large eddy simulation on a finer grid that amounts to an equivalent speedup of 10x.
arXiv Detail & Related papers (2023-07-25T14:27:49Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade-off computation to improve long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
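The coarser-resolution decomposition can be pictured as a strided spatial split: interleaved sub-grids are extracted from the full field, each subtask works at reduced resolution, and the sub-fields are interleaved back. This is a schematic illustration of the idea only, not NeuralStagger's actual architecture, and it assumes the field dimensions are divisible by the stride.

```python
import numpy as np

def spatial_decompose(field, s):
    """Split a 2D field into s*s coarse sub-fields by strided sampling,
    so each subtask sees the domain at 1/s resolution per axis."""
    return [field[i::s, j::s] for i in range(s) for j in range(s)]

def spatial_reassemble(subfields, s):
    """Interleave the coarse sub-fields back into the full-resolution field."""
    h, w = subfields[0].shape
    out = np.empty((h * s, w * s), dtype=subfields[0].dtype)
    for idx, sub in enumerate(subfields):
        i, j = divmod(idx, s)
        out[i::s, j::s] = sub
    return out
```

Because the decomposition is lossless, the subtasks can be solved independently (and in parallel) and the full-resolution answer recovered exactly by reassembly.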
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z)
- FeDXL: Provable Federated Learning for Deep X-Risk Optimization [105.17383135458897]
We tackle a novel federated learning (FL) problem for optimizing a family of X-risks, to which no existing algorithms are applicable.
The challenges for designing an FL algorithm for X-risks lie in the non-decomposability of the objective over multiple machines and the interdependency between different machines.
arXiv Detail & Related papers (2022-10-26T00:23:36Z)
- Interfacing Finite Elements with Deep Neural Operators for Fast Multiscale Modeling of Mechanics Problems [4.280301926296439]
In this work, we explore the idea of multiscale modeling with machine learning and employ DeepONet, a neural operator, as an efficient surrogate of the expensive solver.
DeepONet is trained offline using data acquired from the fine solver for learning the underlying and possibly unknown fine-scale dynamics.
We present various benchmarks to assess accuracy and speedup, and in particular we develop a coupling algorithm for a time-dependent problem.
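DeepONet's branch/trunk structure can be sketched in a few lines: the branch network encodes the input function sampled at fixed sensor points, the trunk network encodes a query coordinate, and their dot product approximates the operator output. The weights below are random and untrained; this shows only the architectural shape, not a trained surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random (untrained) MLP parameters: a list of (weight, bias) pairs."""
    return [(rng.normal(size=(m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Plain feed-forward pass with tanh on all but the last layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

def deeponet(branch, trunk, u_sensors, y):
    """G(u)(y) ~ <branch(u), trunk(y)>: branch encodes the input function
    u sampled at fixed sensors, trunk encodes the query location y."""
    return forward(branch, u_sensors) @ forward(trunk, y)
```

Once trained offline on fine-solver data, evaluating such a surrogate is a handful of matrix products, which is what makes it an efficient stand-in for the expensive fine-scale solver.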
arXiv Detail & Related papers (2022-02-25T20:46:08Z)
- Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piece-wise linear value function.
$\nu$-SDDP can significantly reduce problem solving cost without sacrificing solution quality.
arXiv Detail & Related papers (2021-12-01T22:55:23Z)
- Dynamic Network-Assisted D2D-Aided Coded Distributed Learning [59.29409589861241]
We propose a novel device-to-device (D2D)-aided coded federated learning method (D2D-CFL) for load balancing across devices.
We derive an optimal compression rate for achieving minimum processing time and establish its connection with the convergence time.
Our proposed method is beneficial for real-time collaborative applications, where the users continuously generate training data.
arXiv Detail & Related papers (2021-11-26T18:44:59Z)
- Deep Learning for Reduced Order Modelling and Efficient Temporal Evolution of Fluid Simulations [0.0]
Reduced Order Modelling (ROM) has been widely used to create lower order, computationally inexpensive representations of higher-order dynamical systems.
We develop a novel deep learning framework DL-ROM to create a neural network capable of non-linear projections to reduced order states.
Our model DL-ROM is able to create highly accurate reconstructions from the learned ROM and is thus able to efficiently predict future time steps by temporally traversing in the learned reduced state.
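The "project, step in the reduced state, decode" pattern behind such ROMs can be shown with a linear stand-in: DL-ROM uses neural networks for the non-linear projection and temporal traversal, but the same pipeline is visible with an SVD basis and a least-squares latent propagator (a DMD-style simplification, chosen here only to keep the sketch self-contained).

```python
import numpy as np

def fit_latent_rom(snapshots, r):
    """Project snapshot columns onto an r-dimensional basis and fit a
    linear one-step propagator for the latent trajectory."""
    U = np.linalg.svd(snapshots, full_matrices=False)[0][:, :r]
    Z = U.T @ snapshots                           # latent trajectory (r, T)
    A = Z[:, 1:] @ np.linalg.pinv(Z[:, :-1])      # least-squares dynamics
    return U, A, Z[:, -1]                         # basis, propagator, last state

def rollout(U, A, z0, steps):
    """Advance the latent state and decode back to the full state."""
    z, out = z0, []
    for _ in range(steps):
        z = A @ z
        out.append(U @ z)
    return np.stack(out, axis=1)
```

Future time steps cost only small matrix-vector products in the reduced state, which is the source of the speedup over re-running the full-order model.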
arXiv Detail & Related papers (2021-07-09T17:21:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.