Deep Neural Networks for Nonlinear Model Order Reduction of Unsteady
Flows
- URL: http://arxiv.org/abs/2007.00936v3
- Date: Fri, 2 Oct 2020 17:20:11 GMT
- Authors: Hamidreza Eivazi, Hadi Veisi, Mohammad Hossein Naderi, Vahid
Esfahanian
- Abstract summary: Reduced Order Modeling (ROM) of fluid flows has been an active research topic in the recent decade.
In this work, a novel data-driven technique based on deep neural networks is introduced for reduced order modeling of unsteady fluid flows.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsteady fluid systems are nonlinear high-dimensional dynamical systems that
may exhibit multiple complex phenomena in both time and space. Reduced Order
Modeling (ROM) of fluid flows has been an active research topic in the past
decade, with the primary goal of decomposing complex flows into a set of
features most important for future state prediction and control, typically
using a dimensionality reduction technique. In this work, a novel data-driven
technique based on deep neural networks is introduced for reduced order
modeling of unsteady fluid flows. An autoencoder network is used for nonlinear
dimension reduction and feature extraction as an alternative to singular value
decomposition (SVD). The extracted features are then used as input to a long
short-term memory (LSTM) network to predict the velocity field at future time
instances. The proposed autoencoder-LSTM method is compared with
non-intrusive reduced order models based on dynamic mode decomposition (DMD)
and proper orthogonal decomposition (POD). Moreover, an autoencoder-DMD
algorithm is introduced for reduced order modeling, which uses the autoencoder
network for dimensionality reduction rather than SVD rank truncation. Results
show that the autoencoder-LSTM method is capable of accurately predicting the
fluid flow evolution, with higher values of the coefficient of determination
$R^{2}$ obtained using autoencoder-LSTM compared to the other models.
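The abstract contrasts the autoencoder-LSTM approach with POD/SVD-based baselines and evaluates them with the coefficient of determination $R^{2}$. As a minimal sketch (with synthetic data and illustrative dimensions, not the paper's flow fields), the POD/SVD baseline and the $R^{2}$ metric can be written as:

```python
import numpy as np

# Synthetic snapshot matrix: each column is one flow-field snapshot in time.
rng = np.random.default_rng(0)
n_space, n_time, rank = 200, 50, 5
modes_true = rng.standard_normal((n_space, rank))
coeffs = rng.standard_normal((rank, n_time))
snapshots = modes_true @ coeffs  # rank-5 data, so a rank-5 POD basis is exact

# POD: truncated SVD of the snapshot matrix (the SVD rank truncation the
# paper's autoencoder replaces).
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
Ur = U[:, :rank]              # reduced basis (POD modes)
latent = Ur.T @ snapshots     # low-dimensional features
reconstruction = Ur @ latent  # lift back to the full space

# Coefficient of determination R^2, the metric used in the paper.
ss_res = np.sum((snapshots - reconstruction) ** 2)
ss_tot = np.sum((snapshots - snapshots.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 6))  # → 1.0 for exactly low-rank data
```

In the paper's pipeline the linear map `Ur.T` is replaced by a nonlinear encoder, and the latent trajectory is advanced in time by an LSTM before decoding.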
Related papers
- DA-Flow: Dual Attention Normalizing Flow for Skeleton-based Video Anomaly Detection [52.74152717667157]
We propose a lightweight module called Dual Attention Module (DAM) for capturing cross-dimension interaction relationships in spatio-temporal skeletal data.
It employs a frame attention mechanism to identify the most significant frames and a skeleton attention mechanism to capture broader relationships across fixed partitions with minimal parameters and FLOPs.
arXiv Detail & Related papers (2024-06-05T06:18:03Z) - Generalization capabilities and robustness of hybrid machine learning models grounded in flow physics compared to purely deep learning models [2.8686437689115363]
This study investigates the generalization capabilities and robustness of purely deep learning (DL) models and hybrid models based on physical principles in fluid dynamics applications.
Three autoregressive models were compared: a convolutional autoencoder combined with a convolutional LSTM, a variational autoencoder (VAE) combined with a ConvLSTM, and a hybrid model that combines proper orthogonal decomposition (POD) with an LSTM (POD-DL).
While the VAE and ConvLSTM models accurately predicted laminar flow, the hybrid POD-DL model outperformed the others across both laminar and turbulent flow regimes.
arXiv Detail & Related papers (2024-04-27T12:43:02Z) - Learning Nonlinear Projections for Reduced-Order Modeling of Dynamical
Systems using Constrained Autoencoders [0.0]
We introduce a class of nonlinear projections described by constrained autoencoder neural networks in which both the manifold and the projection fibers are learned from data.
Our architecture uses invertible activation functions and biorthogonal weight matrices to ensure that the encoder is a left inverse of the decoder.
We also introduce new dynamics-aware cost functions that promote learning of oblique projection fibers that account for fast dynamics and nonnormality.
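The left-inverse construction described above can be illustrated with a minimal single-layer sketch (the layer sizes and tanh activation are illustrative assumptions, not the paper's architecture): a decoder with an invertible activation and a weight matrix whose pseudo-inverse supplies the biorthogonal encoder weights, so that encoding after decoding recovers the latent state exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
n_full, n_latent = 8, 3

# Decoder layer: z -> W @ tanh(z). tanh is invertible on its range, and W has
# full column rank, so a left inverse of the whole layer exists.
W = rng.standard_normal((n_full, n_latent))
W_pinv = np.linalg.pinv(W)  # biorthogonal pair: W_pinv @ W = I

def decode(z):
    return W @ np.tanh(z)

def encode(x):
    # Invert the decoder layer by layer: undo W, then undo tanh.
    return np.arctanh(W_pinv @ x)

z = rng.standard_normal(n_latent) * 0.5
assert np.allclose(encode(decode(z)), z)  # encoder is a left inverse of decoder
```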
arXiv Detail & Related papers (2023-07-28T04:01:48Z) - Data-driven Nonlinear Parametric Model Order Reduction Framework using
Deep Hierarchical Variational Autoencoder [5.521324490427243]
A data-driven parametric model order reduction (MOR) method using a deep artificial neural network is proposed.
The proposed LSH-VAE is capable of performing nonlinear MOR for parametric nonlinear dynamic systems with a significant number of degrees of freedom.
arXiv Detail & Related papers (2023-07-10T02:44:53Z) - An advanced spatio-temporal convolutional recurrent neural network for
storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Nonlinear proper orthogonal decomposition for convection-dominated flows [0.0]
We propose an end-to-end Galerkin-free model combining autoencoders with long short-term memory networks for dynamics.
Our approach not only improves the accuracy, but also significantly reduces the computational cost of training and testing.
arXiv Detail & Related papers (2021-10-15T18:05:34Z) - Parameterization of Forced Isotropic Turbulent Flow using Autoencoders
and Generative Adversarial Networks [0.45935798913942893]
Autoencoders and generative neural network models have recently gained popularity in fluid mechanics.
In this study, forced isotropic turbulent flow is generated by parameterizing it with a few basic statistical characteristics.
The use of a neural network-based architecture removes the dependency on classical mesh-based Navier-Stokes equation estimation.
arXiv Detail & Related papers (2021-07-08T18:37:38Z) - An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
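A minimal sketch of one such cell: an explicit-Euler step of a linear first-order system whose effective time constant is modulated by a sigmoidal gate (the gate form and all parameter values here are illustrative assumptions, not the paper's trained weights):

```python
import numpy as np

def ltc_step(x, I, dt, tau=1.0, A=1.0, w=0.5, b=0.0):
    # Liquid time-constant style update (explicit Euler):
    #   dx/dt = -[1/tau + f(I)] * x + f(I) * A
    # The gate f shifts the effective time constant with the input I.
    f = 1.0 / (1.0 + np.exp(-(w * I + b)))  # sigmoid gate (hypothetical params)
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# The state stays bounded: each step is a convex-like combination that keeps
# x between 0 and A for a small enough dt.
x = 0.0
for t in range(1000):
    x = ltc_step(x, I=np.sin(0.01 * t), dt=0.05)
print(0.0 <= x <= 1.0)  # → True (bounded behavior)
```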
arXiv Detail & Related papers (2020-06-08T09:53:35Z) - Learning to Encode Position for Transformer with Continuous Dynamical
Model [88.69870971415591]
We introduce a new way of learning to encode position information for non-recurrent models, such as Transformer models.
We model the evolution of the encoded results along the position index with a continuous dynamical system.
arXiv Detail & Related papers (2020-03-13T00:41:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.