A multi-scale sampling method for accurate and robust deep neural
network to predict combustion chemical kinetics
- URL: http://arxiv.org/abs/2201.03549v1
- Date: Sun, 9 Jan 2022 12:09:03 GMT
- Title: A multi-scale sampling method for accurate and robust deep neural
network to predict combustion chemical kinetics
- Authors: Tianhan Zhang, Yuxiao Yi, Yifan Xu, Zhi X. Chen, Yaoyu Zhang, Weinan
E, Zhi-Qin John Xu
- Abstract summary: This work aims to answer two basic questions regarding the deep neural network (DNN) method.
The current work proposes using the Box-Cox transformation (BCT) to preprocess the combustion data.
A three-hidden-layer DNN, trained with the multi-scale method and without flame-specific simulation data, can predict chemical kinetics in various scenarios.
- Score: 10.03378516586017
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine learning has long been considered a black box for
predicting combustion chemical kinetics due to the extremely large number of
parameters and the lack of evaluation standards and reproducibility. The
current work aims to answer two basic questions regarding the deep neural
network (DNN) method: what data the DNN needs and how general the DNN method
can be. Sampling and preprocessing determine the DNN training dataset and thus
affect the DNN's prediction ability. The current work proposes using the
Box-Cox transformation (BCT) to preprocess the combustion data. In addition,
this work compares different sampling methods with and without preprocessing,
including the Monte Carlo method, manifold sampling, a generative neural
network method (cycle-GAN), and the newly proposed multi-scale sampling. Our
results reveal that a DNN trained on manifold data can capture the chemical
kinetics in limited configurations but is not robust to perturbations, which
are inevitable once the DNN is coupled with the flow field. The Monte Carlo
and cycle-GAN samplings cover a wider phase space but fail to capture
small-scale intermediate species, producing poor prediction results. A
three-hidden-layer DNN, trained with the multi-scale method and without
flame-specific simulation data, predicts chemical kinetics in various
scenarios and remains stable during temporal evolution. This single DNN is
readily implemented with several CFD codes and validated in various
combustors, including (1) zero-dimensional autoignition, (2) one-dimensional
freely propagating flame, (3) two-dimensional jet flame with triple-flame
structure, and (4) three-dimensional turbulent lifted flames. The results
demonstrate the satisfactory accuracy and generalization ability of the
pre-trained DNN. Fortran and Python implementations of the DNN, together with
example code, are provided in the supplementary material for reproducibility.
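The abstract's preprocessing and sampling ideas can be made concrete with a
short sketch. Below is a minimal Python illustration; the Box-Cox exponent
`lam`, the floor `eps` used for zero mass fractions, and the sampling ranges
are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def box_cox(x, lam=0.1, eps=1e-30):
    """Box-Cox transform y = (x**lam - 1) / lam.

    Species mass fractions can be exactly zero, so a small floor keeps the
    transform finite; lam -> 0 recovers the natural logarithm.
    """
    x = np.maximum(x, eps)
    if lam == 0.0:
        return np.log(x)
    return (x ** lam - 1.0) / lam

def monte_carlo_sample(n, n_species):
    """Uniform sampling of mass fractions: trace species around 1e-10 are
    essentially never drawn, matching the failure mode the abstract notes."""
    y = rng.uniform(0.0, 1.0, size=(n, n_species))
    return y / y.sum(axis=1, keepdims=True)  # enforce sum(Y) = 1

def multi_scale_sample(n, n_species, log_min=-10.0, log_max=0.0):
    """Log-uniform sampling: each mass fraction is drawn uniformly in log10
    space, so every scale from 1e-10 to 1 is covered."""
    y = 10.0 ** rng.uniform(log_min, log_max, size=(n, n_species))
    return y / y.sum(axis=1, keepdims=True)  # enforce sum(Y) = 1
```

A similarly hedged sketch of the network shape the abstract describes, here in
PyTorch with a hypothetical layer width; the paper's actual widths,
activation, and input/output layout may differ.

```python
import torch
import torch.nn as nn

class KineticsDNN(nn.Module):
    """Three-hidden-layer fully connected network mapping a (transformed)
    thermochemical state to its change over one chemical time step."""
    def __init__(self, n_state, width=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_state, width), nn.GELU(),
            nn.Linear(width, width), nn.GELU(),
            nn.Linear(width, width), nn.GELU(),
            nn.Linear(width, n_state),
        )

    def forward(self, state):
        return self.net(state)

# Usage: a 10-dimensional state, e.g. temperature plus 9 species.
model = KineticsDNN(n_state=10)
state = torch.rand(4, 10)
print(model(state).shape)  # torch.Size([4, 10])
```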
Related papers
- Assessing Neural Network Representations During Training Using
Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural
Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends a recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
- Graph Neural Networks for Temperature-Dependent Activity Coefficient
Prediction of Solutes in Ionic Liquids [58.720142291102135]
We present a GNN to predict temperature-dependent infinite dilution ACs of solutes in ILs.
We train the GNN on a database including more than 40,000 AC values and compare it to a state-of-the-art MCM.
The GNN and MCM achieve similar high prediction performance, with the GNN additionally enabling high-quality predictions for ACs of solutions that contain ILs and solutes not considered during training.
arXiv Detail & Related papers (2022-06-23T15:27:29Z)
- Enhanced physics-constrained deep neural networks for modeling vanadium
redox flow battery [62.997667081978825]
We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge-discharge cycle, including the tail region of the voltage discharge curve.
arXiv Detail & Related papers (2022-03-03T19:56:24Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Estimating permeability of 3D micro-CT images by physics-informed CNNs
based on DNS [1.6274397329511197]
This paper presents a novel methodology for permeability prediction from micro-CT scans of geological rock samples.
The training dataset for CNNs dedicated to permeability prediction consists of permeability labels that are typically generated by classical lattice Boltzmann methods (LBM).
We instead perform direct numerical simulation (DNS) by solving the stationary Stokes equation in an efficient and distributed-parallel manner.
arXiv Detail & Related papers (2021-09-04T08:43:19Z)
- Finite volume method network for acceleration of unsteady computational
fluid dynamics: non-reacting and reacting flows [0.0]
A neural network model with a unique network architecture and physics-informed loss function was developed to accelerate CFD simulations.
On the reacting flow dataset, the network model was measured to be about 10 times faster than the CFD solver.
arXiv Detail & Related papers (2021-05-07T15:33:49Z)
- Multi-fidelity Bayesian Neural Networks: Algorithms and Applications [0.0]
We propose a new class of Bayesian neural networks (BNNs) that can be trained using noisy data of variable fidelity.
We apply them to learn function approximations as well as to solve inverse problems based on partial differential equations (PDEs).
arXiv Detail & Related papers (2020-12-19T02:03:53Z)
- Transfer Learning with Convolutional Networks for Atmospheric Parameter
Retrieval [14.131127382785973]
The Infrared Atmospheric Sounding Interferometer (IASI) on board the MetOp satellite series provides important measurements for Numerical Weather Prediction (NWP).
Retrieving accurate atmospheric parameters from the raw data provided by IASI is a large challenge, but necessary in order to use the data in NWP models.
We show how features extracted from the IASI data by a CNN trained to predict a physical variable can be used as inputs to another statistical method designed to predict a different physical variable at low altitude.
arXiv Detail & Related papers (2020-12-09T09:28:42Z)
- Filter Pre-Pruning for Improved Fine-tuning of Quantized Deep Neural
Networks [0.0]
We propose a new pruning method called Pruning for Quantization (PfQ) which removes the filters that disturb the fine-tuning of the DNN.
Experiments using well-known models and datasets confirmed that the proposed method achieves higher performance with a similar model size.
arXiv Detail & Related papers (2020-11-13T04:12:54Z)