$\textit{FastSVD-ML-ROM}$: A Reduced-Order Modeling Framework based on
Machine Learning for Real-Time Applications
- URL: http://arxiv.org/abs/2207.11842v1
- Date: Sun, 24 Jul 2022 23:11:07 GMT
- Title: $\textit{FastSVD-ML-ROM}$: A Reduced-Order Modeling Framework based on
Machine Learning for Real-Time Applications
- Authors: G. I. Drakoulas, T. V. Gortsas, G. C. Bourantas, V. N. Burganos, D.
Polyzos
- Abstract summary: High-fidelity numerical simulations constitute the backbone of engineering design.
Reduced order models (ROMs) are employed to approximate the high-fidelity solutions.
The present work proposes a new machine learning (ML) platform for the development of ROMs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Digital twins have emerged as a key technology for optimizing the performance
of engineering products and systems. High-fidelity numerical simulations
constitute the backbone of engineering design, providing an accurate insight
into the performance of complex systems. However, large-scale, dynamic,
non-linear models require significant computational resources and are
prohibitive for real-time digital twin applications. To this end, reduced order
models (ROMs) are employed to approximate the high-fidelity solutions while
accurately capturing the dominant aspects of the physical behavior. The present
work proposes a new machine learning (ML) platform for the development of ROMs,
to handle large-scale numerical problems dealing with transient nonlinear
partial differential equations. Our framework, referred to as
$\textit{FastSVD-ML-ROM}$, utilizes $\textit{(i)}$ a singular value
decomposition (SVD) update methodology, to compute a linear subspace of the
multi-fidelity solutions during the simulation process, $\textit{(ii)}$
convolutional autoencoders for nonlinear dimensionality reduction,
$\textit{(iii)}$ feed-forward neural networks to map the input parameters to
the latent spaces, and $\textit{(iv)}$ long short-term memory networks to
predict and forecast the dynamics of parametric solutions. The efficiency of
the $\textit{FastSVD-ML-ROM}$ framework is demonstrated for a 2D linear
convection-diffusion equation, the problem of fluid around a cylinder, and the
3D blood flow inside an arterial segment. The accuracy of the reconstructed
results demonstrates the robustness and efficiency of the proposed approach.
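The abstract above outlines a four-stage pipeline. The sketch below is one illustrative way to wire those stages together in PyTorch; it is not the authors' implementation, a batch truncated SVD stands in for the incremental SVD update methodology, and all layer sizes and module names are assumptions.

```python
# Minimal, illustrative sketch of the four FastSVD-ML-ROM ingredients described
# above (not the authors' code): a truncated SVD stands in for the incremental
# SVD update, and all layer sizes and names are assumptions.
import numpy as np
import torch
import torch.nn as nn

def reduced_basis(snapshots, rank):
    """(i) snapshots: (n_dofs, n_times). Returns a linear basis of the solutions."""
    u, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :rank]

class ConvAutoencoder(nn.Module):
    """(ii) Nonlinear dimensionality reduction of the SVD coefficients."""
    def __init__(self, rank, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(8 * rank, latent_dim))
        self.decoder = nn.Linear(latent_dim, rank)
    def forward(self, coeffs):                # coeffs: (batch, 1, rank)
        z = self.encoder(coeffs)
        return self.decoder(z), z

class ParamToLatent(nn.Module):
    """(iii) Feed-forward map from simulation parameters to the latent space."""
    def __init__(self, n_params, latent_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_params, 64), nn.ReLU(),
                                 nn.Linear(64, latent_dim))
    def forward(self, mu):
        return self.net(mu)

class LatentLSTM(nn.Module):
    """(iv) LSTM that predicts and forecasts the latent dynamics."""
    def __init__(self, latent_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, latent_dim)
    def forward(self, z_seq):                 # z_seq: (batch, time, latent_dim)
        out, _ = self.lstm(z_seq)
        return self.head(out)
```

In such a setup, the parameter network would supply the latent state for a new parameter set, the LSTM would roll it forward in time, the decoder would return SVD coefficients, and multiplying by the reduced basis would lift them back to the full-order field.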
Related papers
- LC-SVD-DLinear: A low-cost physics-based hybrid machine learning model for data forecasting using sparse measurements [2.519319150166215]
This article introduces a novel methodology that integrates singular value decomposition (SVD) with a shallow linear neural network for forecasting high resolution fluid mechanics data.
We present a variant of the method, LC-HOSVD-DLinear, which combines a low-cost version of the high-order singular value decomposition algorithm with the DLinear network, designed for high-order data.
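As a rough, illustrative sketch of the general idea (not the authors' code): compress the measurement matrix with a truncated SVD and forecast its temporal coefficients with a shallow linear layer, which here is a simplified stand-in for the DLinear network; all sizes are assumptions.

```python
# Illustrative sketch: SVD compression of sparse measurements plus a shallow
# linear forecaster for the temporal coefficients. The single Linear layer is
# a simplified stand-in for the DLinear network; sizes are assumptions.
import numpy as np
import torch
import torch.nn as nn

def svd_compress(data, rank):
    """data: (n_sensors, n_times). Returns spatial modes and temporal coefficients."""
    u, s, vh = np.linalg.svd(data, full_matrices=False)
    modes = u[:, :rank]                         # spatial basis
    coeffs = np.diag(s[:rank]) @ vh[:rank]      # (rank, n_times) temporal coefficients
    return modes, coeffs

class LinearForecaster(nn.Module):
    """Maps a window of past coefficient vectors to the next one."""
    def __init__(self, rank, window):
        super().__init__()
        self.linear = nn.Linear(rank * window, rank)
    def forward(self, past):                    # past: (batch, window, rank)
        return self.linear(past.flatten(1))     # next coefficients: (batch, rank)

# A forecast snapshot of the full-resolution field is then modes @ next_coeffs.
```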
arXiv Detail & Related papers (2024-11-26T13:43:50Z)
- Data-Augmented Predictive Deep Neural Network: Enhancing the extrapolation capabilities of non-intrusive surrogate models [0.5735035463793009]
We propose a new deep learning framework, where kernel dynamic mode decomposition (KDMD) is employed to evolve the dynamics of the latent space generated by the encoder part of a convolutional autoencoder (CAE).
After adding the KDMD-decoder-extrapolated data into the original data set, we train the CAE along with a feed-forward deep neural network using the augmented data.
The trained network can predict future states outside the training time interval at any out-of-training parameter samples.
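A rough, illustrative sketch of this augmentation loop is given below. A plain least-squares DMD operator is used as a simplified stand-in for the kernel DMD of the paper, and all shapes and names are assumptions.

```python
# Illustrative sketch: evolve CAE latent states with a fitted linear operator
# and use the extrapolated states as extra training data. A plain least-squares
# DMD fit replaces the kernel DMD (KDMD) used in the paper; shapes are assumptions.
import numpy as np

def fit_latent_operator(latents):
    """latents: (latent_dim, n_times) latent snapshots from the CAE encoder."""
    z_past, z_next = latents[:, :-1], latents[:, 1:]
    # Least-squares operator A with z_{k+1} ~= A @ z_k
    return z_next @ np.linalg.pinv(z_past)

def extrapolate(A, z_last, n_steps):
    """Roll the latent state forward beyond the training interval."""
    out, z = [], z_last
    for _ in range(n_steps):
        z = A @ z
        out.append(z)
    return np.stack(out, axis=1)                # (latent_dim, n_steps)

# The extrapolated latents are decoded by the CAE decoder and appended to the
# training set before retraining the CAE together with the feed-forward network.
```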
arXiv Detail & Related papers (2024-10-17T09:26:14Z)
- FMint: Bridging Human Designed and Data Pretrained Models for Differential Equation Foundation Model [5.748690310135373]
We propose a novel multi-modal foundation model, named FMint, to bridge the gap between human-designed and data-driven models.
Built on a decoder-only transformer architecture with in-context learning, FMint utilizes both numerical and textual data to learn a universal error correction scheme.
Our results demonstrate the effectiveness of the proposed model in terms of both accuracy and efficiency compared to classical numerical solvers.
arXiv Detail & Related papers (2024-04-23T02:36:47Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first fully deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade off computation to improve long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
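A minimal sketch of the spatial side of such a decomposition follows (illustrative only; the stride of 2 and the field shape are assumptions): the full-resolution field is split into interleaved coarse sub-fields that can be handled independently and reassembled.

```python
# Illustrative sketch of spatial staggering: split a 2D field into interleaved
# coarse-resolution sub-fields (stride 2 in each direction) and reassemble them.
# Each sub-field could then be handled by a lighter, coarser-resolution solver.
import numpy as np

def stagger(field, s=2):
    """field: (H, W) with H, W divisible by s. Returns s*s coarse sub-fields."""
    return [field[i::s, j::s] for i in range(s) for j in range(s)]

def unstagger(subfields, s=2):
    """Inverse of stagger: interleave the coarse sub-fields back together."""
    h, w = subfields[0].shape
    full = np.empty((h * s, w * s), dtype=subfields[0].dtype)
    for k, (i, j) in enumerate((i, j) for i in range(s) for j in range(s)):
        full[i::s, j::s] = subfields[k]
    return full

field = np.arange(16.0).reshape(4, 4)
assert np.allclose(unstagger(stagger(field)), field)
```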
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z)
- FeDXL: Provable Federated Learning for Deep X-Risk Optimization [105.17383135458897]
We tackle a novel federated learning (FL) problem for optimizing a family of X-risks, to which no existing algorithms are applicable.
The challenges in designing an FL algorithm for X-risks lie in the non-decomposability of the objective over multiple machines and the interdependency between different machines.
arXiv Detail & Related papers (2022-10-26T00:23:36Z)
- Interfacing Finite Elements with Deep Neural Operators for Fast Multiscale Modeling of Mechanics Problems [4.280301926296439]
In this work, we explore the idea of multiscale modeling with machine learning and employ DeepONet, a neural operator, as an efficient surrogate of the expensive solver.
DeepONet is trained offline using data acquired from the fine solver for learning the underlying and possibly unknown fine-scale dynamics.
We present various benchmarks to assess accuracy and speedup, and in particular we develop a coupling algorithm for a time-dependent problem.
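For orientation, a minimal DeepONet sketch is given below: a branch net encodes the sampled input function, a trunk net encodes the query coordinate, and their dot product gives the operator output. Layer widths and the number of sensors are assumptions, and this is not the coupling algorithm or the exact setup used in the paper.

```python
# Illustrative DeepONet sketch: branch net for the sampled input function,
# trunk net for the query coordinates, dot product for the output.
# Layer widths and the number of sensors are assumptions.
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, n_sensors, coord_dim=1, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_sensors, 64), nn.Tanh(),
                                    nn.Linear(64, p))
        self.trunk = nn.Sequential(nn.Linear(coord_dim, 64), nn.Tanh(),
                                   nn.Linear(64, p))
    def forward(self, u, y):
        # u: (batch, n_sensors) input function samples; y: (batch, coord_dim) query points
        return (self.branch(u) * self.trunk(y)).sum(dim=-1, keepdim=True)

model = DeepONet(n_sensors=100)
u = torch.randn(8, 100)      # e.g. a fine-scale field sampled at 100 sensors
y = torch.rand(8, 1)         # query coordinates
out = model(u, y)            # (8, 1) predicted operator output
```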
arXiv Detail & Related papers (2022-02-25T20:46:08Z)
- Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piece-wise linear value function.
$\nu$-SDDP can significantly reduce problem-solving cost without sacrificing solution quality.
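A rough sketch of one way to represent such a piece-wise linear value function is shown below: a small network maps an instance encoding to the slopes and intercepts of K affine pieces, and the value is their pointwise maximum (the usual cut representation in SDDP). All names and sizes are hypothetical; this is not the $\nu$-SDDP algorithm itself.

```python
# Illustrative sketch: a network maps an instance encoding to the slopes and
# intercepts of K affine pieces; the value function is their pointwise maximum.
# Names and sizes are hypothetical.
import torch
import torch.nn as nn

class PiecewiseLinearValue(nn.Module):
    def __init__(self, instance_dim, state_dim, n_pieces=16):
        super().__init__()
        self.state_dim, self.n_pieces = state_dim, n_pieces
        self.net = nn.Linear(instance_dim, n_pieces * (state_dim + 1))
    def forward(self, instance, state):
        # instance: (batch, instance_dim); state: (batch, state_dim)
        params = self.net(instance).view(-1, self.n_pieces, self.state_dim + 1)
        slopes, intercepts = params[..., :-1], params[..., -1]
        values = (slopes * state.unsqueeze(1)).sum(-1) + intercepts  # (batch, n_pieces)
        return values.max(dim=1).values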
arXiv Detail & Related papers (2021-12-01T22:55:23Z)
- Dynamic Network-Assisted D2D-Aided Coded Distributed Learning [59.29409589861241]
We propose a novel device-to-device (D2D)-aided coded federated learning method (D2D-CFL) for load balancing across devices.
We derive an optimal compression rate for achieving minimum processing time and establish its connection with the convergence time.
Our proposed method is beneficial for real-time collaborative applications, where the users continuously generate training data.
arXiv Detail & Related papers (2021-11-26T18:44:59Z)
- Deep Learning for Reduced Order Modelling and Efficient Temporal Evolution of Fluid Simulations [0.0]
Reduced Order Modelling (ROM) has been widely used to create lower order, computationally inexpensive representations of higher-order dynamical systems.
We develop a novel deep learning framework DL-ROM to create a neural network capable of non-linear projections to reduced order states.
Our model DL-ROM is able to create highly accurate reconstructions from the learned ROM and is thus able to efficiently predict future time steps by temporally traversing in the learned reduced state.
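A minimal sketch of the inference loop implied by such an approach (assumed component names; not the DL-ROM code): encode the current state once, step repeatedly in the reduced space, and decode only when a full-order field is needed.

```python
# Illustrative sketch of time-stepping in a learned reduced space:
# encode once, advance the latent state with a learned map, decode on demand.
# `encoder`, `decoder`, and `latent_step` are assumed to be trained modules.
import torch

@torch.no_grad()
def rollout(encoder, latent_step, decoder, u0, n_steps):
    z = encoder(u0)                    # project the initial field to the reduced state
    fields = []
    for _ in range(n_steps):
        z = latent_step(z)             # cheap evolution in the reduced space
        fields.append(decoder(z))      # lift back to the full-order field
    return torch.stack(fields)
```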
arXiv Detail & Related papers (2021-07-09T17:21:53Z)