Deep Learning for Efficient Reconstruction of High-Resolution Turbulent
DNS Data
- URL: http://arxiv.org/abs/2010.11348v2
- Date: Mon, 15 Mar 2021 10:05:55 GMT
- Title: Deep Learning for Efficient Reconstruction of High-Resolution Turbulent
DNS Data
- Authors: Pranshu Pant, Amir Barati Farimani
- Abstract summary: Large Eddy Simulation (LES) presents a more computationally efficient approach for solving fluid flows on lower-resolution (LR) grids.
We introduce a novel deep learning framework, SR-DNS Net, which aims to mitigate this inherent trade-off between solution fidelity and computational complexity.
Our model efficiently reconstructs high-fidelity DNS data from LES-like low-resolution solutions while yielding good reconstruction metrics.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Within the domain of Computational Fluid Dynamics, Direct Numerical
Simulation (DNS) is used to obtain highly accurate numerical solutions for
fluid flows. However, this approach for numerically solving the Navier-Stokes
equations is extremely computationally expensive mostly due to the requirement
of greatly refined grids. Large Eddy Simulation (LES) presents a more
computationally efficient approach for solving fluid flows on lower-resolution
(LR) grids but results in an overall reduction in solution fidelity. In
this paper, we introduce a novel deep learning framework, SR-DNS Net, which aims
to mitigate this inherent trade-off between solution fidelity and computational
complexity by leveraging deep learning techniques used in image
super-resolution. Using our model, we wish to learn the mapping from a coarser
LR solution to a refined high-resolution (HR) DNS solution so as to eliminate
the need for performing DNS on highly refined grids. Our model efficiently
reconstructs the high-fidelity DNS data from LES-like low-resolution
solutions while yielding good reconstruction metrics. Thus, our implementation
improves the solution accuracy of LR solutions while incurring only a marginal
increase in computational cost required for deploying the trained deep learning
model.
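The abstract describes learning a mapping from a coarse LR solution to a refined HR DNS solution, in the spirit of image super-resolution. The paper does not give its architecture here, so the following is only a minimal illustrative sketch of the LR-to-HR pipeline in NumPy: the LR field is first lifted onto the HR grid by upsampling, then refined by a convolutional layer. The fixed smoothing kernel stands in for trained network weights, and the field sizes and upscaling factor are arbitrary assumptions.

```python
import numpy as np

def upsample_nearest(field, factor):
    # Lift a 2D LR field onto the HR grid by nearest-neighbor upsampling.
    return np.repeat(np.repeat(field, factor, axis=0), factor, axis=1)

def conv2d(field, kernel):
    # 'Same'-padded single-channel 2D convolution, as in one layer of
    # a super-resolution network.
    kh, kw = kernel.shape
    padded = np.pad(field, ((kh // 2,) * 2, (kw // 2,) * 2), mode="edge")
    out = np.zeros_like(field, dtype=float)
    H, W = field.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "LES-like" low-resolution field on an 8x8 grid
rng = np.random.default_rng(0)
lr = rng.standard_normal((8, 8))

# Step 1: map the LR solution onto the HR (DNS-resolution) grid
hr_init = upsample_nearest(lr, 4)       # shape (32, 32)

# Step 2: one refinement layer; a real model would apply several
# learned convolutional layers here
kernel = np.ones((3, 3)) / 9.0
hr_refined = conv2d(hr_init, kernel)

print(hr_init.shape, hr_refined.shape)  # (32, 32) (32, 32)
```

In a trained model, the refinement layers would be optimized against paired LR/HR snapshots so that the output approximates the DNS field rather than merely smoothing the upsampled input.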
Related papers
- DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z) - Reducing the Need for Backpropagation and Discovering Better Optima With
Explicit Optimizations of Neural Networks [4.807347156077897]
We propose a computationally efficient alternative for optimizing neural networks.
We derive an explicit solution to a simple feed-forward language model.
We show that explicit solutions perform near-optimally in experiments.
arXiv Detail & Related papers (2023-11-13T17:38:07Z) - An Operator Learning Framework for Spatiotemporal Super-resolution of Scientific Simulations [3.921076451326108]
The Super Resolution Operator Network (SRNet) frames super-resolution as an operator learning problem.
It draws inspiration from existing operator learning problems to learn continuous representations of parametric differential equations from low-resolution approximations.
No restrictions are imposed on the locations of sensors at which the low-resolution approximations are provided.
arXiv Detail & Related papers (2023-11-04T05:33:23Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning tasks into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - Learning to Optimize Permutation Flow Shop Scheduling via Graph-based
Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence and makes training more stable and accurate.
Our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z) - Reconstructing High-resolution Turbulent Flows Using Physics-Guided
Neural Networks [3.9548535445908928]
Direct numerical simulation (DNS) of turbulent flows is computationally expensive and cannot be applied to flows with large Reynolds numbers.
Large eddy simulation (LES) is an alternative that is computationally less demanding, but is unable to capture all of the scales of turbulent transport accurately.
We build a new data-driven methodology based on super-resolution techniques to reconstruct DNS data from LES predictions.
arXiv Detail & Related papers (2021-09-06T03:01:24Z) - Deep Iterative Residual Convolutional Network for Single Image
Super-Resolution [31.934084942626257]
We propose a deep Iterative Super-Resolution Residual Convolutional Network (ISRResCNet)
It exploits the powerful image regularization and large-scale optimization techniques by training the deep network in an iterative manner with a residual learning approach.
Our method with a few trainable parameters improves the results for different scaling factors in comparison with the state-of-art methods.
arXiv Detail & Related papers (2020-09-07T12:54:14Z) - Optimization-driven Machine Learning for Intelligent Reflecting Surfaces
Assisted Wireless Networks [82.33619654835348]
Intelligent reflecting surface (IRS) has been employed to reshape wireless channels by controlling the phase shifts of individual scattering elements.
Due to the large number of scattering elements, passive beamforming is typically challenged by high computational complexity.
In this article, we focus on machine learning (ML) approaches for improving performance in IRS-assisted wireless networks.
arXiv Detail & Related papers (2020-08-29T08:39:43Z) - Model-Driven Beamforming Neural Networks [47.754731555563836]
This article introduces general data- and model-driven beamforming neural networks (BNNs).
It presents various possible learning strategies and also discusses complexity reduction for the DL-based BNNs.
We also offer enhancement methods such as training-set augmentation and transfer learning in order to improve the generality of BNNs.
arXiv Detail & Related papers (2020-01-15T12:50:09Z) - Channel Assignment in Uplink Wireless Communication using Machine
Learning Approach [54.012791474906514]
This letter investigates a channel assignment problem in uplink wireless communication systems.
Our goal is to maximize the sum rate of all users subject to integer channel assignment constraints.
Due to high computational complexity, machine learning approaches are employed to obtain computationally efficient solutions.
arXiv Detail & Related papers (2020-01-12T15:54:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all information) and is not responsible for any consequences.