Deep Learning for Reduced Order Modelling and Efficient Temporal
Evolution of Fluid Simulations
- URL: http://arxiv.org/abs/2107.04556v1
- Date: Fri, 9 Jul 2021 17:21:53 GMT
- Title: Deep Learning for Reduced Order Modelling and Efficient Temporal
Evolution of Fluid Simulations
- Authors: Pranshu Pant, Ruchit Doshi, Pranav Bahl, Amir Barati Farimani
- Abstract summary: Reduced Order Modelling (ROM) has been widely used to create lower order, computationally inexpensive representations of higher-order dynamical systems.
We develop a novel deep learning framework DL-ROM to create a neural network capable of non-linear projections to reduced order states.
Our model DL-ROM creates highly accurate reconstructions from the learned ROM and can therefore efficiently predict future time steps by traversing the learned reduced state in time.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reduced Order Modelling (ROM) has been widely used to create lower order,
computationally inexpensive representations of higher-order dynamical systems.
Using these representations, ROMs can efficiently model flow fields while using
significantly fewer parameters. Conventional ROMs accomplish this by linearly
projecting higher-order manifolds to lower-dimensional space using
dimensionality reduction techniques such as Proper Orthogonal Decomposition
(POD). In this work, we develop a novel deep learning framework DL-ROM (Deep
Learning - Reduced Order Modelling) to create a neural network capable of
non-linear projections to reduced order states. We then use the learned reduced
state to efficiently predict future time steps of the simulation using 3D
Autoencoder and 3D U-Net based architectures. Our model DL-ROM creates highly
accurate reconstructions from the learned ROM and can therefore efficiently
predict future time steps by traversing the learned reduced state in time. All
of this is achieved without ground-truth supervision or the need to iteratively
solve the expensive Navier-Stokes (NS) equations, thereby
resulting in massive computational savings. To test the effectiveness and
performance of our approach, we evaluate our implementation on five different
Computational Fluid Dynamics (CFD) datasets using reconstruction performance
and computational runtime metrics. DL-ROM can reduce the computational runtimes
of iterative solvers by nearly two orders of magnitude while maintaining an
acceptable error threshold.
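
To make the pipeline described in the abstract concrete, the sketch below shows the two ingredients in miniature: a 3D convolutional autoencoder that projects a block of flow snapshots onto a low-dimensional latent state, and a small network that advances that latent state in time so future steps can be predicted without calling the NS solver. This is a minimal illustration assuming PyTorch, a single flow variable, and arbitrary layer sizes; the latent stepper stands in for the paper's 3D U-Net based temporal model, and none of the hyperparameters are taken from the paper.

```python
# Minimal sketch (not the authors' DL-ROM code): a 3D convolutional autoencoder
# that compresses a block of consecutive flow snapshots into a reduced latent
# state, plus a small network that advances that latent state in time.
# Layer sizes, latent dimension, and the MLP stepper are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder3D(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        # Input: (batch, 1, T, H, W) -- T stacked snapshots of one flow variable.
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # collapse remaining space-time extent
        )
        self.fc = nn.Linear(64, latent_dim)     # reduced-order state

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class Decoder3D(nn.Module):
    def __init__(self, latent_dim=64, out_shape=(8, 64, 64)):
        super().__init__()
        self.out_shape = out_shape
        self.fc = nn.Linear(latent_dim, 64 * 1 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, z):
        x = self.deconv(self.fc(z).view(-1, 64, 1, 8, 8))
        # Resize to the exact snapshot-block shape in case strides do not divide evenly.
        return F.interpolate(x, size=self.out_shape)

class LatentStepper(nn.Module):
    """Advances the reduced state z_t -> z_{t+1}; a small MLP stands in for
    the paper's U-Net-style temporal model (an assumption made for brevity)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )

    def forward(self, z):
        return z + self.net(z)                  # residual update of the latent state

# Rollout: encode an initial snapshot block once, then step purely in latent
# space and decode only when a full-field reconstruction is needed.
enc, dec, step = Encoder3D(), Decoder3D(), LatentStepper()
x0 = torch.randn(1, 1, 8, 64, 64)               # placeholder snapshot block
z = enc(x0)
for _ in range(10):                             # 10 future steps without a CFD solve
    z = step(z)
prediction = dec(z)                             # reconstructed future flow field
```

A rollout of this kind pays the cost of the encoder only once; every subsequent step is a cheap latent update plus an optional decode, which is where the reported runtime savings over iterative solvers would come from.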
Related papers
- Self-STORM: Deep Unrolled Self-Supervised Learning for Super-Resolution Microscopy [55.2480439325792]
We introduce deep unrolled self-supervised learning, which alleviates the need for such data by training a sequence-specific, model-based autoencoder.
Our proposed method exceeds the performance of its supervised counterparts.
arXiv Detail & Related papers (2024-03-25T17:40:32Z)
- Reduced-order modeling of unsteady fluid flow using neural network ensembles [0.0]
We propose using bagging, a commonly used ensemble learning technique, to develop a fully data-driven reduced-order model framework.
The framework uses CAEs for spatial reconstruction of the full-order model and LSTM ensembles for time-series prediction.
Results show that the presented framework effectively reduces error propagation and leads to more accurate time-series prediction of latent variables at unseen points.
arXiv Detail & Related papers (2024-02-08T03:02:59Z)
- Geometry-Informed Neural Operator for Large-Scale 3D PDEs [76.06115572844882]
We propose the geometry-informed neural operator (GINO) to learn the solution operator of large-scale partial differential equations.
We successfully trained GINO to predict the pressure on car surfaces using only five hundred data points.
arXiv Detail & Related papers (2023-09-01T16:59:21Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models and can adaptively trade off computation to improve long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- Combined space-time reduced-order model with 3D deep convolution for extrapolating fluid dynamics [4.984601297028257]
Deep learning-based reduced-order models have been recently shown to be effective in simulations.
In this study, we aim to improve the extrapolation capability by modifying network architecture and integrating space-time physics as an implicit bias.
To demonstrate the effectiveness of the 3D convolution network, we consider the benchmark problem of flow past a circular cylinder under laminar flow conditions.
arXiv Detail & Related papers (2022-11-01T07:14:07Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamic and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Learned Cone-Beam CT Reconstruction Using Neural Ordinary Differential Equations [8.621792868567018]
Learned iterative reconstruction algorithms for inverse problems offer the flexibility to combine analytical knowledge about the problem with modules learned from data.
In computed tomography, extending such approaches from 2D fan-beam to 3D cone-beam data is challenging due to the prohibitively high GPU memory.
This paper proposes to use neural ordinary differential equations to solve the reconstruction problem in a residual formulation via numerical integration.
arXiv Detail & Related papers (2022-01-19T12:32:38Z)
- InversionNet3D: Efficient and Scalable Learning for 3D Full Waveform Inversion [14.574636791985968]
In this paper, we present InversionNet3D, an efficient and scalable encoder-decoder network for 3D FWI.
The proposed method employs group convolution in the encoder to establish an effective hierarchy for learning information from multiple sources.
Experiments on the 3D Kimberlina dataset demonstrate that InversionNet3D achieves lower computational cost and lower memory footprint compared to the baseline.
arXiv Detail & Related papers (2021-03-25T22:24:57Z)
- FlowStep3D: Model Unrolling for Self-Supervised Scene Flow Estimation [87.74617110803189]
Estimating the 3D motion of points in a scene, known as scene flow, is a core problem in computer vision.
We present a recurrent architecture that learns a single step of an unrolled iterative alignment procedure for refining scene flow predictions.
arXiv Detail & Related papers (2020-11-19T23:23:48Z)
- PolyDL: Polyhedral Optimizations for Creation of High Performance DL primitives [55.79741270235602]
We present compiler algorithms to automatically generate high performance implementations of Deep Learning primitives.
We develop novel data reuse analysis algorithms using the polyhedral model.
We also show that such a hybrid compiler plus a minimal library-use approach results in state-of-the-art performance.
arXiv Detail & Related papers (2020-06-02T06:44:09Z)
- DeepCFD: Efficient Steady-State Laminar Flow Approximation with Deep Convolutional Neural Networks [5.380828749672078]
DeepCFD is a convolutional neural network (CNN) based model that efficiently approximates solutions for the problem of non-uniform steady laminar flows.
Using DeepCFD, we obtain a speedup of up to 3 orders of magnitude compared to the standard CFD approach while maintaining low error rates.
arXiv Detail & Related papers (2020-04-19T12:00:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.