Deep Neural Networks to Correct Sub-Precision Errors in CFD
- URL: http://arxiv.org/abs/2202.04233v1
- Date: Wed, 9 Feb 2022 02:32:40 GMT
- Title: Deep Neural Networks to Correct Sub-Precision Errors in CFD
- Authors: Akash Haridas, Nagabhushana Rao Vadlamani, Yuki Minamoto
- Abstract summary: Several machine learning techniques have been successful in correcting the errors arising from spatial discretization.
We employ a Convolutional Neural Network together with a fully differentiable numerical solver performing 16-bit arithmetic to learn a tightly-coupled ML-CFD hybrid solver.
Compared to the 16-bit solver, we demonstrate the efficacy of the ML-CFD hybrid solver in reducing error accumulation in the velocity field and improving the kinetic energy spectrum at higher frequencies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Loss of information in numerical simulations can arise from various sources
while solving discretized partial differential equations. In particular,
precision-related errors can accumulate in the quantities of interest when the
simulations are performed using low-precision 16-bit floating-point arithmetic
compared to an equivalent 64-bit simulation. Low-precision computation requires
far fewer computational resources than high-precision computation. Several machine
learning (ML) techniques proposed recently have been successful in correcting
the errors arising from spatial discretization. In this work, we extend these
techniques to improve Computational Fluid Dynamics (CFD) simulations performed
using low numerical precision. We first quantify the precision-related errors
accumulated in a Kolmogorov forced turbulence test case. Subsequently, we
employ a Convolutional Neural Network together with a fully differentiable
numerical solver performing 16-bit arithmetic to learn a tightly-coupled ML-CFD
hybrid solver. Compared to the 16-bit solver, we demonstrate the efficacy of
the ML-CFD hybrid solver in reducing error accumulation in the velocity field
and improving the kinetic energy spectrum at higher frequencies.
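The tight coupling described in the abstract can be illustrated with a short JAX sketch. Everything below is a hedged approximation: `cfd_step` and the single-layer CNN are placeholders for the paper's differentiable solver and correction network, and the unrolled loss against a 64-bit reference reflects the general solver-in-the-loop training recipe rather than the authors' exact setup.

```python
import jax
import jax.numpy as jnp

def cfd_step(u, dt=0.01):
    # Placeholder for one differentiable solver step; a real solver would
    # apply the discretized Navier-Stokes operator here.
    return u + dt * (jnp.roll(u, -1, axis=-1) - jnp.roll(u, 1, axis=-1))

def cnn_apply(params, u):
    # Minimal single-layer convolution standing in for the correction CNN.
    w, b = params  # e.g. w of shape (1, 1, 3, 3), b a scalar
    out = jax.lax.conv_general_dilated(u[None, None], w,
                                       window_strides=(1, 1), padding="SAME")
    return out[0, 0] + b

def hybrid_step(params, u):
    u16 = cfd_step(u.astype(jnp.float16))   # advance in 16-bit arithmetic
    u32 = u16.astype(jnp.float32)
    return u32 + cnn_apply(params, u32)     # learned sub-precision correction

def loss(params, u0, reference, n_steps):
    # Unrolled mismatch against a 64-bit reference trajectory.
    u, err = u0, 0.0
    for t in range(n_steps):
        u = hybrid_step(params, u)
        err += jnp.mean((u - reference[t]) ** 2)
    return err / n_steps

# Differentiating through the solver steps is what makes the coupling "tight".
grad_fn = jax.jit(jax.grad(loss), static_argnums=3)
```

With zero-initialized weights the hybrid step reduces to the plain 16-bit solver, so training starts from that baseline and learns only the correction.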
Related papers
- COmoving Computer Acceleration (COCA): $N$-body simulations in an emulated frame of reference
We introduce COmoving Computer Acceleration (COCA), a hybrid framework interfacing machine learning and $N$-body simulations.
The correct physical equations of motion are solved in an emulated frame of reference, so that any emulation error is corrected by design.
COCA efficiently reduces emulation errors in particle trajectories, requiring far fewer force evaluations than running the corresponding simulation without ML.
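A minimal sketch of this frame-of-reference idea, with hypothetical `frame` and `true_accel` callables; the paper's integrator and emulator are different:

```python
import jax.numpy as jnp

def coca_step(dx, dv, t, dt, frame, true_accel):
    """One illustrative Euler step in an emulated frame of reference.

    `frame(t)` returns the emulator's frame position and acceleration;
    the exact force acts on the full position x = x_frame + dx, so any
    emulation error enters only through the residual acceleration and
    is corrected by the real force evaluations.
    """
    x_frame, a_frame = frame(t)
    x = x_frame + dx                  # full position in the simulation frame
    a_res = true_accel(x) - a_frame   # residual (non-inertial) acceleration
    dv = dv + dt * a_res
    dx = dx + dt * dv
    return dx, dv
```

When the emulator is perfect, the residual acceleration vanishes and no correction is integrated; the worse the emulation, the more work the residual dynamics absorb, which is the sense in which errors are corrected by design.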
arXiv Detail & Related papers (2024-09-03T17:27:12Z)
- Speeding up and reducing memory usage for scientific machine learning via mixed precision
Training neural networks for partial differential equations requires large amounts of memory and computational resources.
In search of computational efficiency, training neural networks using half precision (float16) has gained substantial interest.
We explore mixed precision, which combines the float16 and float32 numerical formats to reduce memory usage and increase computational speed.
Our experiments show that mixed-precision training not only substantially decreases training time and memory demands but also maintains model accuracy.
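A hedged sketch of the float16/float32 combination in JAX follows; the fixed loss scale, toy linear model, and plain SGD update are assumptions for illustration, not the paper's recipe:

```python
import jax
import jax.numpy as jnp

LOSS_SCALE = 2.0 ** 12   # fixed scale keeps float16 gradients from underflowing

def forward(params, x):
    w, b = params
    return x @ w + b                     # runs in whatever dtype x and w carry

def scaled_loss(params16, x16, y):
    pred = forward(params16, x16)        # float16 compute
    return LOSS_SCALE * jnp.mean((pred - y.astype(pred.dtype)) ** 2)

@jax.jit
def train_step(params32, x, y, lr=1e-3):
    # Cast float32 master parameters to float16 for the forward/backward pass.
    params16 = jax.tree_util.tree_map(lambda p: p.astype(jnp.float16), params32)
    grads16 = jax.grad(scaled_loss)(params16, x.astype(jnp.float16), y)
    # Unscale in float32 and update the float32 master copy.
    grads32 = jax.tree_util.tree_map(
        lambda g: g.astype(jnp.float32) / LOSS_SCALE, grads16)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params32, grads32)
```

Keeping the master copy in float32 prevents small updates from rounding to zero, while the float16 forward and backward passes are where the memory and speed savings come from.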
arXiv Detail & Related papers (2024-01-30T00:37:57Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously tolerate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Guaranteed Approximation Bounds for Mixed-Precision Neural Operators
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
arXiv Detail & Related papers (2023-07-27T17:42:06Z)
- Accelerating Part-Scale Simulation in Liquid Metal Jet Additive Manufacturing via Operator Learning
Part-scale predictions require many small-scale simulations.
A model describing droplet coalescence for LMJ may include coupled incompressible fluid flow, heat transfer, and phase change equations.
We apply an operator learning approach to learn a mapping between initial and final states of the droplet coalescence process.
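A toy sketch of the initial-to-final-state mapping in the same JAX style; the dense two-layer map over a flattened 1-D field is an illustrative stand-in for the paper's neural operator, and the grid size and widths are arbitrary assumptions:

```python
import jax
import jax.numpy as jnp

N = 64  # grid points in a toy 1-D field (an assumption for illustration)

def init_params(key, width=128):
    k1, k2 = jax.random.split(key)
    return {"w1": jax.random.normal(k1, (N, width)) / jnp.sqrt(N),
            "b1": jnp.zeros(width),
            "w2": jax.random.normal(k2, (width, N)) / jnp.sqrt(width),
            "b2": jnp.zeros(N)}

def apply_operator(params, u0):
    """Map a discretized initial field u0 (shape (N,)) to a predicted final field."""
    h = jnp.tanh(u0 @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

def loss(params, u0_batch, u_final_batch):
    # Supervised pairs (u0, u_final) would come from small-scale CFD runs.
    pred = jax.vmap(lambda u: apply_operator(params, u))(u0_batch)
    return jnp.mean((pred - u_final_batch) ** 2)
```

Once trained, evaluating `apply_operator` replaces a full coalescence simulation, which is where the part-scale speed-up comes from.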
arXiv Detail & Related papers (2022-02-02T17:24:16Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying sensitivity of different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
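The core idea can be sketched with a fake-quantization pass that assigns different bit-widths per layer; the symmetric uniform quantizer and the particular bit assignments below are assumptions, since the paper learns the local precision settings rather than fixing them by hand:

```python
import jax.numpy as jnp

def quantize(w, n_bits):
    """Symmetric uniform quantization of a weight tensor to n_bits."""
    qmax = 2.0 ** (n_bits - 1) - 1
    scale = jnp.maximum(jnp.max(jnp.abs(w)) / qmax, 1e-8)
    return jnp.round(w / scale) * scale

# Hypothetical per-layer bit-widths: sensitive layers keep more bits.
layer_bits = {"embedding": 8, "recurrent": 4, "projection": 2, "output": 8}

def quantize_model(params):
    return {name: quantize(w, layer_bits[name]) for name, w in params.items()}
```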
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
- Mixed Precision Quantization of Transformer Language Models for Speech Recognition
State-of-the-art neural language models represented by Transformers are becoming increasingly complex and expensive for practical applications.
Current low-bit quantization methods are based on uniform precision and fail to account for the varying sensitivity of different parts of the system to quantization errors.
The optimal local precision settings are automatically learned using two techniques.
Experiments were conducted on Penn Treebank (PTB) and on a Switchboard-corpus-trained LF-MMI TDNN system.
arXiv Detail & Related papers (2021-11-29T09:57:00Z)
- Large-scale Neural Solvers for Partial Differential Equations
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, known as physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions as well as state-of-the-art numerical solvers, such as spectral solvers.
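For reference, here is the generic PINN training signal in the same JAX style, using the 1-D heat equation u_t = u_xx as a stand-in PDE; the two-layer network and the collocation sampling are placeholders, not the GatedPINN architecture:

```python
import jax
import jax.numpy as jnp

def u_net(params, t, x):
    # Tiny fully-connected network u(t, x); a placeholder architecture.
    # Example parameter shapes: w1 (2, 32), b1 (32,), w2 (32, 1), b2 (1,).
    (w1, b1), (w2, b2) = params
    h = jnp.tanh(jnp.stack([t, x]) @ w1 + b1)
    return (h @ w2 + b2)[0]

def pde_residual(params, t, x):
    # Residual of the heat equation u_t = u_xx, via automatic differentiation.
    u_t = jax.grad(u_net, argnums=1)(params, t, x)
    u_xx = jax.grad(jax.grad(u_net, argnums=2), argnums=2)(params, t, x)
    return u_t - u_xx

def physics_loss(params, ts, xs):
    # Mean squared residual over mesh-free collocation points.
    res = jax.vmap(lambda t, x: pde_residual(params, t, x))(ts, xs)
    return jnp.mean(res ** 2)
```

Because the residual is evaluated at arbitrary collocation points via automatic differentiation, no mesh or manual discretization of the equation is needed, which is the property the summary highlights.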
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- AQD: Towards Accurate Fully-Quantized Object Detection
We propose an Accurate Quantized object Detection solution, termed AQD, to eliminate floating-point computation.
Our AQD achieves comparable or even better performance compared with its full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z)
- Designing Accurate Emulators for Scientific Processes using Calibration-Driven Deep Models
Learn-by-Calibrating (LbC) is a novel deep learning approach for designing emulators in scientific applications.
We show that LbC provides significant improvements in generalization error over widely-adopted loss function choices.
LbC achieves high-quality emulators even in small-data regimes and, more importantly, recovers the inherent noise structure without any explicit priors.
arXiv Detail & Related papers (2020-05-05T16:54:11Z)